path | concatenated_notebook
---|---|
yet_another_week/seminar_MCTS.ipynb | ###Markdown
Seminar: Monte-Carlo tree search. In this seminar, we'll implement vanilla MCTS planning and use it to solve some Gym envs. But before we do that, we first need to modify the Gym env to allow saving and loading game states, which facilitates backtracking.
###Code
from gym.core import Wrapper
from pickle import dumps,loads
from collections import namedtuple
#a container for get_result function below. Works just like tuple, but prettier
ActionResult = namedtuple("action_result",("snapshot","observation","reward","is_done","info"))
class WithSnapshots(Wrapper):
"""
Creates a wrapper that supports saving and loading environment states.
Required for planning algorithms.
This class will have access to the core environment as self.env, e.g.:
- self.env.reset() #reset original env
- self.env.ale.cloneState() #make snapshot for atari. load with .restoreState()
- ...
You can also use reset, step and render directly for convenience.
- s, r, done, _ = self.step(action) #step, same as self.env.step(action)
- self.render(close=True) #close window, same as self.env.render(close=True)
"""
def get_snapshot(self):
"""
:returns: environment state that can be loaded with load_snapshot
Snapshots guarantee same env behaviour each time they are loaded.
Warning! Snapshots can be arbitrary things (strings, integers, json, tuples)
Don't count on them being pickle strings when implementing MCTS.
Developer Note: Make sure the object you return will not be affected by
anything that happens to the environment after it's saved.
You shouldn't, for example, return self.env.
In case of doubt, use pickle.dumps or deepcopy.
"""
self.render(close=True) #close popup windows since we can't pickle them
return dumps(self.env)
def load_snapshot(self,snapshot):
"""
Loads snapshot as current env state.
Should not change snapshot inplace (in case of doubt, deepcopy).
"""
assert not hasattr(self,"_monitor") or hasattr(self.env,"_monitor"), "can't backtrack while recording"
self.render(close=True) #close popup windows since we can't load into them
self.env = loads(snapshot)
def get_result(self,snapshot,action):
"""
A convenience function that
- loads snapshot,
- commits action via self.step,
- and takes snapshot again :)
:returns: next snapshot, next_observation, reward, is_done, info
Basically it returns next snapshot and everything that env.step would have returned.
"""
<your code here load,commit,take snapshot>
return ActionResult(<next_snapshot>, #fill in the variables
<next_observation>,
<reward>, <is_done>, <info>)
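# A possible sketch (not the official solution), using only the wrapper API defined above:
# self.load_snapshot(snapshot)
# observation, reward, is_done, info = self.step(action)
# next_snapshot = self.get_snapshot()
# return ActionResult(next_snapshot, observation, reward, is_done, info)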
###Output
_____no_output_____
###Markdown
try out snapshots:
###Code
import gym
import matplotlib.pyplot as plt
#make env
env = WithSnapshots(gym.make("CartPole-v0"))
env.reset()
n_actions = env.action_space.n
print("initial_state:")
plt.imshow(env.render('rgb_array'))
#create first snapshot
snap0 = env.get_snapshot()
#play without making snapshots (faster)
while True:
is_done = env.step(env.action_space.sample())[2]
if is_done:
print("Whoops! We died!")
break
print("final state:")
plt.imshow(env.render('rgb_array'))
plt.show()
#reload initial state
env.load_snapshot(snap0)
print("\n\nAfter loading snapshot")
plt.imshow(env.render('rgb_array'))
plt.show()
#get outcome (snapshot, observation, reward, is_done, info)
res = env.get_result(snap0,env.action_space.sample())
snap1, observation, reward = res[:3]
#second step
res2 = env.get_result(snap1,env.action_space.sample())
###Output
_____no_output_____
###Markdown
MCTS: Monte-Carlo tree search. In this section, we'll implement the vanilla MCTS algorithm with UCB1-based node selection. We will start by implementing the `Node` class - a simple class that acts as an MCTS node and supports some of the MCTS algorithm steps. This MCTS implementation makes some assumptions about the environment; you can find those _in the notes section at the end of the notebook_.
###Code
assert isinstance(env,WithSnapshots)
class Node:
""" a tree node for MCTS """
#metadata:
parent = None #parent Node
value_sum = 0. #sum of state values from all visits (numerator)
times_visited = 0 #counter of visits (denominator)
def __init__(self,parent,action,):
"""
Creates an empty node with no children.
Does so by committing an action and recording the outcome.
:param parent: parent Node
:param action: action to commit from parent Node
"""
self.parent = parent
self.action = action
self.children = set() #set of child nodes
#get action outcome and save it
res = env.get_result(parent.snapshot,action)
self.snapshot,self.observation,self.immediate_reward,self.is_done,_ = res
def is_leaf(self):
return len(self.children)==0
def is_root(self):
return self.parent is None
def get_mean_value(self):
return self.value_sum / self.times_visited if self.times_visited !=0 else 0
def ucb_score(self,scale=10,max_value=1e100):
"""
Computes the UCB-1 upper bound using the current value and visit counts for this node and its parent.
:param scale: Multiplies the upper bound by this value. From the Hoeffding inequality, assumes the reward range to be [0, scale].
:param max_value: a value that represents infinity (for unvisited nodes)
"""
if self.times_visited == 0:
return max_value
#compute ucb-1 additive component (to be added to mean value)
#hint: you can use self.parent.times_visited for N times node was considered,
# and self.times_visited for n times it was visited
U = <your code here>
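# A possible sketch (one common UCB-1 form, not the official solution), assuming `from math import log, sqrt`:
# U = sqrt(2 * log(self.parent.times_visited) / self.times_visited)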
return self.get_mean_value() + scale*U
#MCTS steps
def select_best_leaf(self):
"""
Picks the leaf with highest priority to expand
Does so by recursively picking nodes with best UCB-1 score until it reaches the leaf.
"""
if self.is_leaf():
return self
children = self.children
best_child = <select best child node in terms of node.ucb_score()>
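# One possible way to pick the child (a sketch):
# best_child = max(children, key=lambda child: child.ucb_score())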
return best_child.select_best_leaf()
def expand(self):
"""
Expands the current node by creating all possible child nodes.
Then returns one of those children.
"""
assert not self.is_done, "can't expand from terminal state"
for action in range(n_actions):
self.children.add(Node(self,action))
return self.select_best_leaf()
def rollout(self,t_max=10**4):
"""
Play the game from this state to the end (done) or for t_max steps.
On each step, pick action at random (hint: env.action_space.sample()).
Compute the sum of rewards from the current state until the episode ends (or t_max steps have been taken).
Note 1: use env.action_space.sample() for random action
Note 2: if node is terminal (self.is_done is True), just return 0
"""
#set env into the appropriate state
env.load_snapshot(self.snapshot)
obs = self.observation
is_done = self.is_done
<your code here - rollout and compute reward>
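# A possible sketch of a random rollout (assumes the old gym step API used above):
# rollout_reward = 0.
# for _ in range(t_max):
#     if is_done:
#         break
#     obs, r, is_done, _ = env.step(env.action_space.sample())
#     rollout_reward += r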
return rollout_reward
def propagate(self,child_value):
"""
Uses child value (sum of rewards) to update parents recursively.
"""
#compute node value
my_value = self.immediate_reward + child_value
#update value_sum and times_visited
self.value_sum+=my_value
self.times_visited+=1
#propagate upwards
if not self.is_root():
self.parent.propagate(my_value)
def safe_delete(self):
"""safe delete to prevent memory leak in some python versions"""
del self.parent
for child in self.children:
child.safe_delete()
del child
class Root(Node):
def __init__(self,snapshot,observation):
"""
creates special node that acts like tree root
:snapshot: snapshot (from env.get_snapshot) to start planning from
:observation: last environment observation
"""
self.parent = self.action = None
self.children = set() #set of child nodes
#root: load snapshot and observation
self.snapshot = snapshot
self.observation = observation
self.immediate_reward = 0
self.is_done=False
@staticmethod
def from_node(node):
"""initializes node as root"""
root = Root(node.snapshot,node.observation)
#copy data
copied_fields = ["value_sum","times_visited","children","is_done"]
for field in copied_fields:
setattr(root,field,getattr(node,field))
return root
###Output
_____no_output_____
###Markdown
Main MCTS loopWith all we implemented, MCTS boils down to a trivial piece of code.
###Code
def plan_mcts(root,n_iters=10):
"""
builds tree with monte-carlo tree search for n_iters iterations
:param root: tree node to plan from
:param n_iters: how many select-expand-simulate-propagate loops to make
"""
for _ in range(n_iters):
node = <select best leaf>
if node.is_done:
node.propagate(0)
else: #node is not terminal
<expand-simulate-propagate loop>
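# A possible sketch (with node = root.select_best_leaf() above):
# best_leaf = node.expand()
# rollout_reward = best_leaf.rollout()
# best_leaf.propagate(rollout_reward)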
env.reset()
###Output
_____no_output_____
###Markdown
Plan and execute. In this section, we use the MCTS implementation to find an optimal policy.
###Code
root_observation = env.reset()
root_snapshot = env.get_snapshot()
root = Root(root_snapshot,root_observation)
#plan from root:
plan_mcts(root,n_iters=1000)
from IPython.display import clear_output
from itertools import count
from gym.wrappers import Monitor
total_reward = 0 #sum of rewards
test_env = loads(root_snapshot) #env used to show progress
for i in count():
#get best child
best_child = <select child with highest mean reward>
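# One possible choice (a sketch): pick the child with the highest mean value
# best_child = max(root.children, key=lambda child: child.get_mean_value())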
#take action
s,r,done,_ = test_env.step(best_child.action)
#show image
clear_output(True)
plt.title("step %i"%i)
plt.imshow(test_env.render('rgb_array'))
plt.show()
total_reward += r
if done:
print("Finished with reward = ",total_reward)
break
#discard unrealized part of the tree [because not every child matters :(]
for child in root.children:
if child != best_child:
child.safe_delete()
#declare best child a new root
root = Root.from_node(best_child)
assert not root.is_leaf(), "We ran out of tree! Need more planning! Try growing tree right inside the loop."
#you may want to expand tree here
#<your code here>
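# A possible sketch: keep growing the tree from the new root between steps, e.g.
# plan_mcts(root, n_iters=10)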
###Output
_____no_output_____ |
homework/homework_4/Homework #4-v3.ipynb | ###Markdown
Homework 4: Let's simulate a microscopeDue Date: Friday, April 5th at midnightThe goal of this homework assignment is to create a physically accurate simulation of an optical microscope. This should give you an idea of how to treat an imaging system as a black box linear system, by performing filtering in the Fourier domain. This type of model is also applicable to imaging with other EM radiation, ultrasound, MRI, CT etc. Before I forget, I'd like to thank Eric Thompson for helping me translate a simple model that I originally wrote in Matlab into Python.This simulation will: 1. Illuminate a thin sample (with finite thickness variations) with light from a particular angle2. The emerging light will then propagate from the sample to the microscope lens,3. The light will be filtered by the microscope lens, 4. And then will continue to the image sensor and will be detected by the image sensor.Because things are small within a microscope, you have to treat light as a wave. So, we'll be defining the sample, illumination and lens effects as complex-valued vectors.As a first step, you should define all of the variables of interest and an (x,y) coordinate system for the sample. The variables will include the size of the sample, which we can make 0.25 mm (this is a normal size for a microscope sample), the number of discrete elements we'll split the sample up into (1000), the wavelength of light ($\lambda$=0.5 $\mu$m) and the size of the smallest feature that we'll be able to see within the simulated sample, $\Delta x$, which we'll set at half the wavelength of light. You can use the np.linspace function to create x and y axes, and the np.meshgrid function to generate a 2D array of x and y values that will be useful later.
###Code
# define the characteristics of light
wavelength = .5e-3 # units are mm
delta_x = 0.5*wavelength
num_samples = 1000
# Define the spatial coordinates of the sample
starting_coordinate = (-num_samples/2) * delta_x
ending_coordinate = (num_samples/2 - 1) * delta_x
#make linspace, meshgrid coordinates as needed
x = np.linspace(starting_coordinate,ending_coordinate,num=num_samples)
y = np.linspace(starting_coordinate,ending_coordinate,num=num_samples)
[xx, yy] = np.meshgrid(x,y)
###Output
_____no_output_____
###Markdown
Next, read in an image to use as the test sample. I have included a test target image that is useful to check the resolution of the microscope with. In addition to simulating a sample with this image, please feel free to also use another image of your choice to create a simulated sample. For the assignment, please use the test target image to simulate two different types of sample: one that has both absorption and phase delay (as in the code below), and then later for question (c), one that is only absorptive.
###Code
#Define sample absorption
sample = plt.imread('resolution_target.png')
sample = sample/sample.max()
#Add in sample phase delay
sample_phase = sample
optical_thickness = 20 * wavelength
sample = sample * np.exp(1j * sample_phase*optical_thickness/wavelength)
#complex exponential represents phase delay
#show absolute value of sample = its absorption
plt.figure()
plt.imshow(np.abs(sample), extent=(x[0], x[-1], y[0], y[-1]))
plt.title('The perfect sample')
plt.xlabel('mm'); plt.ylabel('mm'); plt.gray()
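# For question (c), a purely absorptive sample could be modeled (a sketch, not part of the
# provided solution) by keeping only the magnitude, i.e. dropping the phase term:
# sample_absorptive = np.abs(sample)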
###Output
_____no_output_____
###Markdown
Next, let's model a plane wave hitting this thin sample. I've written down the general form of a plane wave for you guys below. Note that you can simulate the plane wave such that it hits the sample at any desired angle ($\theta_x$,$\theta_y$).
###Code
from scipy.signal import convolve2d as conv
#Define plane wave
plane_wave_angle_x = 0 * np.pi/180
plane_wave_angle_y = 0 * np.pi/180
illumination_plane_wave = np.exp(1j*2*np.pi/wavelength * (np.sin(plane_wave_angle_x) * xx + np.sin(plane_wave_angle_y) * yy))
#Define field emerging from sample: a thin sample multiplies the incident field point by point
emerging_field = illumination_plane_wave * sample
###Output
_____no_output_____
###Markdown
Now, let's propagate this field to the lens aperture plane via a Fourier transform, to create the sample spectrum. It is also helpful to define a set of coordinates $(f_x,f_y)$ at this Fourier transform plane. You can use the $(x,y)$ coordinates that you formed above, as well as the relationship $2f_x^{max}=1/\Delta x$, to define the $(f_x,f_y)$ coordinates. That is, the full range of the spatial frequency axis is inversely proportional to the smallest step size in the spatial axis. Please go ahead and plot the magnitude of the sample spectrum with a set of marked and labeled axes (like for the sample in space). It is helpful to plot it on a log scale for visualization.
###Code
#define spatial frequency axis, 1/mm: the full range is 1/delta_x, so fx_max = 1/(2*delta_x)
fmax = 1/(2*delta_x)
starting = -fmax
ending = fmax
#make linspace, meshgrid as needed
fx = np.linspace(starting, ending, num=num_samples)
fy = np.linspace(starting, ending, num=num_samples)
[fxx, fyy] = np.meshgrid(fx,fy)
# Take 2D Fourier transform of the field emerging from the sample
# (fftshift centers zero frequency so the plot extents and the lens mask line up)
FT_sample = np.fft.fftshift(np.fft.fft2(emerging_field))
# plot the Fourier transform of the sample in inverse mm coordinates
plt.figure()
plt.imshow( np.log(np.abs(FT_sample)), extent= (fx[0], fx[-1], fy[0], fy[-1]) )
plt.title('Fourier transform of the sample')
plt.xlabel('1/mm'); plt.ylabel('1/mm'); plt.gray()
np.shape(FT_sample)
###Output
_____no_output_____
###Markdown
Next, define the lens transfer function as a circle with a finite radius in the spatial frequency domain. Inside the circle the value of the transfer function is 1, and outside it is 0. Let's make the lens transfer function diameter 1/4th the total spatial frequency axis coordinates. The diameter is set by a parameter called the lens numerical aperture.
###Code
#Define lens numerical aperture as a percentage of the total width of the spatial frequency domain
L = ending-starting
d = L/4 # lens transfer function diameter, in 1/mm
#Define lens transfer function as matrix with 1's within desired radius, 0's outside
lens = np.zeros(np.shape(FT_sample))
for i in range(0,1000):
for j in range(0,1000):
# keep spatial frequencies within the aperture radius d/2 (compared in 1/mm units)
if fxx[i,j]**2 + fyy[i,j]**2 <= (d/2)**2:
lens[i,j] = 1
# Plot what the transfer function looks like
plt.imshow(lens, extent=(fx[0], fx[-1], fy[0], fy[-1]))
plt.title('lens transfer function')
plt.xlabel('1/mm'); plt.ylabel('1/mm'); plt.gray()
np.shape(lens)
###Output
_____no_output_____
###Markdown
You can now filter the sample spectrum with the lens transfer function, propagate this filtered spectrum to the image plane, and sample it on a detector that only detects the intensity of light, as we've shown in class. Let's assume the magnification of the lens is 5X (meaning the image of the sample at the detector plane is 5X larger than it is at the lens plane). Please display the resulting image on a new coordinate system, $(x',y')$ which represent the coordinates at the detector plane.
###Code
#Create filtered sample spectrum: the lens transfer function multiplies the spectrum directly
filtered = FT_sample*lens
#Define spatial coordinates at image plane: the image is the sample magnified 5X
ix = np.linspace(5*x[0], 5*x[-1], num=num_samples)
iy = np.linspace(5*y[0], 5*y[-1], num=num_samples)
[ixx, iyy] = np.meshgrid(ix,iy)
#Propagate filtered sample spectrum to image plane (undo the earlier fftshift before inverting)
image = np.fft.ifft2(np.fft.ifftshift(filtered))
#Detect intensity (squared magnitude) of resulting field on sensor
mag_image = np.abs(image)**2
#Plot resulting image
plt.imshow(mag_image, extent=(ix[0], ix[-1], iy[0], iy[-1]))
###Output
_____no_output_____
###Markdown
Ok, you've simulated a microscope image! Great! Now let's try to change a few parameters to see what happens. Please try out the following tests and briefly answer the following questions:(a) Let's try changing the illumination angle by 5 degrees. What happens to the sample spectrum at the aperture plane? Why does that change the appearance of the image? Try again with a larger angle of illumination that changes the appearance of the image dramatically, such that the background of the image becomes black. This is called a dark field image. Why is there a transition from an image with a bright background to a dark background, and under what illumination angle conditions does this occur?(b) Let's also change the lens numerical aperture. Instead of a circle having a diameter that is 25% the width of the frequency domain, let's try a smaller lens with 10%. How does the appearance of the image change? And why? Next, let's try a wider lens with 50%. Describe how the appearance of the image changes and why. (c) In the code that we provided, the sample both absorbed light and phase-delayed it at different locations across its surface. Now try to repeat the above exercise with a perfectly flat sample that only absorbs light and provides a constant phase delay across its surface. How does the sample spectrum change when you remove the phase delay term? How does this alter the appearance of the image, if at all, at different illumination angles?(d) (bonus problem for extra credit) The lens aperture does not have to be a circle - it can be whatever shape you want. Go ahead and add an "apodizer" into the lens, which is (literally) a black circle marked onto the center of the lens. You can model this dark circle by making the center of the lens aperture circle zero, up to some first radius, then the lens aperture is 1 up to some second radius, and then the lens aperture ends and everything is zero again (this will form a ring). How does the appearance of the resulting image change, and why? (a) (b) (c) (d)
###Code
d1 = 1/8*L
d2 = 1/6*L
len2 = np.zeros(np.shape(FT_sample))
for i in range(0,1000):
for j in range(0,1000):
# pass spatial frequencies in the annulus between radii d1/2 and d2/2 (compared in 1/mm units)
if fxx[i,j]**2 + fyy[i,j]**2 >= (d1/2)**2 and fxx[i,j]**2 + fyy[i,j]**2 <= (d2/2)**2:
len2[i,j] = 1
plt.imshow(len2, extent=(fx[0], fx[-1], fy[0], fy[-1]))
###Output
_____no_output_____
###Markdown
Homework 4: Let's simulate a microscopeDue Date: Friday, April 5th at midnightThe goal of this homework assignment is to create a physically accurate simulation of an optical microscope. This should give you an idea of how to treat an imaging system as a black box linear system, by performing filtering in the Fourier domain. This type of model is also applicable to imaging with other EM radiation, ultrasound, MRI, CT etc. Before I forget, I'd like to thank Eric Thompson for helping me translate a simple model that I originally wrote in Matlab into Python.This simulation will: 1. Illuminate a thin sample (with finite thickness variations) with light from a particular angle2. The emerging light will then propagate from the sample to the microscope lens,3. The light will be filtered by the microscope lens, 4. And then will continue to the image sensor and will be detected by the image sensor.Because things are small within a microscope, you have to treat light as a wave. So, we'll be defining the sample, illumination and lens effects as complex-valued vectors.As a first step, you should define all of the variables of interest and an (x,y) coordinate system for the sample. The variables will include the size of the sample, which we can make 0.25 mm (this is a normal size for a microscope sample), the number of discrete elements we'll split the sample up into (1000), the wavelength of light ($\lambda$=0.5 $\mu$m) and the size of the smallest feature that we'll be able to see within the simulated sample, $\Delta x$, which we'll set at half the wavelength of light. You can use the np.linspace function to create x and y axes, and the np.meshgrid function to generate a 2D array of x and y values that will be useful later.
###Code
wavelength = .5e-3 # units are mm
delta_x =
num_samples =
# Define the spatial coordinates of the sample
starting_coordinate = (-num_samples/2) * delta_x
ending_coordinate = (num_samples/2 - 1) * delta_x
#make linspace, meshgrid coordinates as needed
x = np.linspace...
[xx, yy] = meshgrid...
###Output
_____no_output_____
###Markdown
Next, read in an image to use as the test sample. I have included a test target image that is useful to check the resolution of the microscope with. In addition to simulating a sample with this image, please feel free to also use another image of your choice to create a simulated sample. For the assignment, please use the test target image to simulate two different types of sample: one that has both absorption and phase delay (as in the code below), and then later for question (c), one that is only absorptive.
###Code
#Define sample absorption
sample = plt.imread('resolution_target.png')
sample = sample/sample.max()
#Add in sample phase delay
sample_phase = sample
optical_thickness = 20 * wavelength
sample = sample * np.exp(1j * sample_phase*optical_thickness/wavelength) #complex exponential represents phase delay
#show absolute value of sample = its absorption
plt.figure()
plt.imshow(np.abs(sample), extent=(x[0], x[-1], y[0], y[-1]))
plt.title('The perfect sample')
plt.xlabel('mm'); plt.ylabel('mm'); plt.gray()
###Output
_____no_output_____
###Markdown
Next, let's model a plane wave hitting this thin sample. I've written down the general form of a plane wave for you guys below. Note that you can simulate the plane wave such that it hits the sample at any desired angle ($\theta_x$,$\theta_y$).
###Code
#Define plane wave
plane_wave_angle_x = 0 * np.pi/180
plane_wave_angle_y = 0 * np.pi/180
illumination_plane_wave = np.exp(1j*2*np.pi/wavelength * (np.sin(plane_wave_angle_x) * xx + np.sin(plane_wave_angle_y) * yy))
#Define field emerging from sample
emerging_field =
###Output
_____no_output_____
###Markdown
Now, let's propagate this field to the lens aperture plane via a Fourier transform, to create the sample spectrum. It is also helpful to define a set of coordinates $(f_x,f_y)$ at this Fourier transform plane. You can use the $(x,y)$ coordinates that you formed above, as well as the relationship $2f_x^{max}=1/\Delta x$, to define the $(f_x,f_y)$ coordinates. That is, the full range of the spatial frequency axis is inversely proportional to the smallest step size in the spatial axis. Please go ahead and plot the magnitude of the sample spectrum with a set of marked and labeled axes (like for the sample in space). It is helpful to plot it on a log scale for visualization.
###Code
#define total range of spatial frequency axis, 1/mm
#make linspace, meshgrid as needed
# Take 2D fourier transform of sample
# plot the Fourier transform of the sample in inverse mm coordinates
###Output
_____no_output_____
###Markdown
Next, define the lens transfer function as a circle with a finite radius in the spatial frequency domain. Inside the circle the value of the transfer function is 1, and outside it is 0. Let's make the lens transfer function diameter 1/4th the total spatial frequency axis coordinates. The diameter is set by a parameter called the lens numerical aperture.
###Code
#Define lens numerical aperture as percentage of total width of spatial frequency domain
#Define lens transfer function as matrix with 1's within desired radius, 0's outside
# Plot what the transfer function looks like
###Output
_____no_output_____
###Markdown
You can now filter the sample spectrum with the lens transfer function, propagate this filtered spectrum to the image plane, and sample it on a detector that only detects the intensity of light, as we've shown in class. Let's assume the magnification of the lens is 5X (meaning the image of the sample at the detector plane is 5X larger than it is at the lens plane). Please display the resulting image on a new coordinate system, $(x',y')$ which represent the coordinates at the detector plane.
###Code
#Create filtered sample spectrum
#Define spatial coordinates at image plane, using magnification
#Propagate filtered sample spectrum to image plane
#Detect intensity (squared magnitude) of resulting field on sensor
#Plot resulting image
###Output
_____no_output_____
notebook/mha.ipynb | ###Markdown
Multi-Headed Attention
###Code
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
%matplotlib inline
%config InlineBackend.figure_format='retina'
print ("PyTorch version:[%s]."%(torch.__version__))
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print ("device:[%s]."%(device))
###Output
_____no_output_____
###Markdown
Scaled Dot-Product Attention (SDPA)- Data $X \in \mathbb{R}^{n \times d}$ where $n$ is the number data and $d$ is the data dimension- Query and Key $Q, K \in \mathbb{R}^{n \times d_K}$ - Value $V \in \mathbb{R}^{n \times d_V} $$\text{Attention}(Q,K,V) = \text{softmax} \left( \frac{QK^T}{\sqrt{d_K}} \right)V \in \mathbb{R}^{n \times d_V} $
###Code
class ScaledDotProductAttention(nn.Module):
def forward(self,Q,K,V,mask=None):
d_K = K.size()[-1] # key dimension
scores = # FILL IN HERE
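# A possible sketch (not the provided solution): scaled dot product of queries and keys
# scores = Q.matmul(K.transpose(-2, -1)) / np.sqrt(d_K)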
if mask is not None:
scores = scores.masked_fill(mask==0, -1e9)
attention = F.softmax(scores,dim=-1)
out = attention.matmul(V)
return out,attention
# Demo run of scaled dot product attention
SPDA = ScaledDotProductAttention()
n_batch,d_K,d_V = 3,128,256 # d_K(=d_Q) does not have to equal d_V
n_Q,n_K,n_V = 30,50,50
Q = torch.rand(n_batch,n_Q,d_K)
K = torch.rand(n_batch,n_K,d_K)
V = torch.rand(n_batch,n_V,d_V)
out,attention = SPDA.forward(Q,K,V,mask=None)
def sh(x): return str(x.shape)[11:-1]
print ("SDPA: Q%s K%s V%s => out%s attention%s"%
(sh(Q),sh(K),sh(V),sh(out),sh(attention)))
# It supports 'multi-headed' attention
n_batch,n_head,d_K,d_V = 3,5,128,256
n_Q,n_K,n_V = 30,50,50 # n_K and n_V should be the same
Q = torch.rand(n_batch,n_head,n_Q,d_K)
K = torch.rand(n_batch,n_head,n_K,d_K)
V = torch.rand(n_batch,n_head,n_V,d_V)
out,attention = SPDA.forward(Q,K,V,mask=None)
# out: [n_batch x n_head x n_Q x d_V]
# attention: [n_batch x n_head x n_Q x n_K]
def sh(x): return str(x.shape)[11:-1]
print ("(Multi-Headed) SDPA: Q%s K%s V%s => out%s attention%s"%
(sh(Q),sh(K),sh(V),sh(out),sh(attention)))
###Output
_____no_output_____
###Markdown
Multi-Headed Attention (MHA)$\text{head}_{\color{red}i} = \text{Attention}(Q {\color{green}W}^Q_{\color{red}i},K {\color{green}W}^K_{\color{red}i}, V {\color{green}W}^V_{\color{red}i}) $
###Code
class MultiHeadedAttention(nn.Module):
def __init__(self,d_feat=128,n_head=5,actv=F.relu,USE_BIAS=True,dropout_p=0.1,device=None):
"""
:param d_feat: feature dimension
:param n_head: number of heads
:param actv: activation after each linear layer
:param USE_BIAS: whether to use bias
:param dropout_p: dropout rate
:device: which device to use (e.g., cuda:0)
"""
super(MultiHeadedAttention,self).__init__()
if (d_feat%n_head) != 0:
raise ValueError("d_feat(%d) should be divisible by n_head(%d)"%(d_feat,n_head))
self.d_feat = d_feat
self.n_head = n_head
self.d_head = self.d_feat // self.n_head
self.actv = actv
self.USE_BIAS = USE_BIAS
self.dropout_p = dropout_p # prob. of zeroed
self.lin_Q = nn.Linear(self.d_feat,self.d_feat,self.USE_BIAS)
self.lin_K = nn.Linear(self.d_feat,self.d_feat,self.USE_BIAS)
self.lin_V = nn.Linear(self.d_feat,self.d_feat,self.USE_BIAS)
self.lin_O = nn.Linear(self.d_feat,self.d_feat,self.USE_BIAS)
self.dropout = nn.Dropout(p=self.dropout_p)
def forward(self,Q,K,V,mask=None):
"""
:param Q: [n_batch, n_Q, d_feat]
:param K: [n_batch, n_K, d_feat]
:param V: [n_batch, n_V, d_feat] <= n_K and n_V must be the same
:param mask:
"""
n_batch = Q.shape[0]
Q_feat = self.lin_Q(Q)
K_feat = self.lin_K(K)
V_feat = self.lin_V(V)
# Q_feat: [n_batch, n_Q, d_feat]
# K_feat: [n_batch, n_K, d_feat]
# V_feat: [n_batch, n_V, d_feat]
# Multi-head split of Q, K, and V (d_feat = n_head*d_head)
Q_split = Q_feat.view(n_batch, -1, self.n_head, self.d_head).permute(0, 2, 1, 3)
K_split = K_feat.view(n_batch, -1, self.n_head, self.d_head).permute(0, 2, 1, 3)
V_split = V_feat.view(n_batch, -1, self.n_head, self.d_head).permute(0, 2, 1, 3)
# Q_split: [n_batch, n_head, n_Q, d_head]
# K_split: [n_batch, n_head, n_K, d_head]
# V_split: [n_batch, n_head, n_V, d_head]
# Multi-Headed Attention
d_K = K.size()[-1] # key dimension
scores = # FILL IN HERE
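# A possible sketch (not the provided solution); one could also scale by np.sqrt(self.d_head):
# scores = torch.matmul(Q_split, K_split.permute(0, 1, 3, 2)) / np.sqrt(d_K)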
if mask is not None:
scores = scores.masked_fill(mask==0,-1e9)
attention = torch.softmax(scores,dim=-1)
x_raw = torch.matmul(self.dropout(attention),V_split) # dropout is NOT mentioned in the paper
# attention: [n_batch, n_head, n_Q, n_K]
# x_raw: [n_batch, n_head, n_Q, d_head]
# Reshape x
x_rsh1 = x_raw.permute(0,2,1,3).contiguous()
# x_rsh1: [n_batch, n_Q, n_head, d_head]
x_rsh2 = x_rsh1.view(n_batch,-1,self.d_feat)
# x_rsh2: [n_batch, n_Q, d_feat]
# Linear
x = self.lin_O(x_rsh2)
# x: [n_batch, n_Q, d_feat]
out = {'Q_feat':Q_feat,'K_feat':K_feat,'V_feat':V_feat,
'Q_split':Q_split,'K_split':K_split,'V_split':V_split,
'scores':scores,'attention':attention,
'x_raw':x_raw,'x_rsh1':x_rsh1,'x_rsh2':x_rsh2,'x':x}
return out
# Self-Attention Layer
n_batch = 128
n_src = 32
d_feat = 200
n_head = 5
src = torch.rand(n_batch,n_src,d_feat)
self_attention = MultiHeadedAttention(
d_feat=d_feat,n_head=n_head,actv=F.relu,USE_BIAS=True,dropout_p=0.1,device=device)
out = self_attention.forward(src,src,src,mask=None)
Q_feat,K_feat,V_feat = out['Q_feat'],out['K_feat'],out['V_feat']
Q_split,K_split,V_split = out['Q_split'],out['K_split'],out['V_split']
scores,attention = out['scores'],out['attention']
x_raw,x_rsh1,x_rsh2,x = out['x_raw'],out['x_rsh1'],out['x_rsh2'],out['x']
# Print out shapes
def sh(_x): return str(_x.shape)[11:-1]
print ("Input src:\t%s \t= [n_batch, n_src, d_feat]"%(sh(src)))
print ()
print ("Q_feat: \t%s \t= [n_batch, n_src, d_feat]"%(sh(Q_feat)))
print ("K_feat: \t%s \t= [n_batch, n_src, d_feat]"%(sh(K_feat)))
print ("V_feat: \t%s \t= [n_batch, n_src, d_feat]"%(sh(V_feat)))
print ()
print ("Q_split: \t%s \t= [n_batch, n_head, n_src, d_head]"%(sh(Q_split)))
print ("K_split: \t%s \t= [n_batch, n_head, n_src, d_head]"%(sh(K_split)))
print ("V_split: \t%s \t= [n_batch, n_head, n_src, d_head]"%(sh(V_split)))
print ()
print ("scores: \t%s \t= [n_batch, n_head, n_src, n_src]"%(sh(scores)))
print ("attention:\t%s \t= [n_batch, n_head, n_src, n_src]"%(sh(attention)))
print ()
print ("x_raw: \t%s \t= [n_batch, n_head, n_src, d_head]"%(sh(x_raw)))
print ("x_rsh1: \t%s \t= [n_batch, n_src, n_head, d_head]"%(sh(x_rsh1)))
print ("x_rsh2: \t%s \t= [n_batch, n_src, d_feat]"%(sh(x_rsh2)))
print ()
print ("Output x: \t%s \t= [n_batch, n_src, d_feat]"%(sh(x)))
###Output
_____no_output_____ |
Papermill Runner.ipynb | ###Markdown
Manual. This notebook takes an input notebook and a list of parameters and then saves the runs as separate notebooks in the given output directory.* input_notebook path to the input ipython notebook* output_directory output directory where the notebooks will be stored. This defaults to the home directory* params parameters to the notebook, should be a tuple of dictionaries. Each set of parameters is run as a separate notebook. The number of parameters can be different for each notebook.* names Tuple with the list of names for the output notebooks. This is to easily identify the output notebooks. In case no names are given, names are generated automatically in the format name0, name1, etc. Note-----1. Notebooks are run sequentially2. If the full list of names is not provided, names are generated for the remaining notebooks3. Notebooks are always saved with the ipynb extension4. If your notebook writes files such as csv, then include this logic in your notebook to save each output as a separate file. You can include a name parameter in your notebook to solve this problem
###Code
import os
from typing import Tuple, Dict
import papermill as pm
## parameters
input_notebook:str = "Example.ipynb"
output_directory:str = os.environ['HOME']
params:Tuple[Dict] = (
{'x': 10, 'y': 20},
{'x': 30, 'y': 50}
)
names:Tuple[str] = ()
# Add dummy names for output notebook
if len(names) < len(params):
missing_names = len(params) - len(names)
names = names + tuple([f"name{i}" for i in range(missing_names)])
print(names)
for param, name in zip(params, names):
if not(name.endswith('ipynb')):
name = f"{name}.ipynb"
pm.execute_notebook(
input_notebook,
output_path=os.path.join(output_directory, name),
parameters=param,
)
###Output
_____no_output_____ |
quests/tpu/flowers_resnet.ipynb | ###Markdown
Image Classification from scratch with TPUs on Cloud ML Engine using ResNetThis notebook demonstrates how to do image classification from scratch on a flowers dataset using TPUs and the resnet trainer.
###Code
import os
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.9'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
###Output
_____no_output_____
###Markdown
Convert JPEG images to TensorFlow RecordsMy dataset consists of JPEG images in Google Cloud Storage. I have two CSV files that are formatted as follows: image-name, categoryInstead of reading the images from JPEG each time, we'll convert the JPEG data and store it as TF Records.
###Code
%%bash
gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | head -5 > /tmp/input.csv
cat /tmp/input.csv
%%bash
gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt
cat /tmp/labels.txt
###Output
_____no_output_____
###Markdown
Clone the TPU repo. Let's git clone the repo and get the preprocessing and model files. The model code has imports of the form `import resnet_model as model_lib`. We will need to change this to `from . import resnet_model as model_lib`.
###Code
%%writefile copy_resnet_files.sh
#!/bin/bash
rm -rf tpu
git clone https://github.com/tensorflow/tpu
cd tpu
TFVERSION=$1
echo "Switching to version r$TFVERSION"
git checkout r$TFVERSION
cd ..
MODELCODE=tpu/models/official/resnet
OUTDIR=mymodel
rm -rf $OUTDIR
# preprocessing
cp -r imgclass $OUTDIR # brings in setup.py and __init__.py
cp tpu/tools/datasets/jpeg_to_tf_record.py $OUTDIR/trainer/preprocess.py
# model: fix imports
for FILE in $(ls -p $MODELCODE | grep -v /); do
CMD="cat $MODELCODE/$FILE "
for f2 in $(ls -p $MODELCODE | grep -v /); do
MODULE=`echo $f2 | sed 's/.py//g'`
CMD="$CMD | sed 's/^import ${MODULE}/from . import ${MODULE}/g' "
done
CMD="$CMD > $OUTDIR/trainer/$FILE"
eval $CMD
done
find $OUTDIR
echo "Finished copying files into $OUTDIR"
!bash ./copy_resnet_files.sh $TFVERSION
###Output
_____no_output_____
###Markdown
Enable TPU service accountAllow Cloud ML Engine to access the TPU and bill to your project
###Code
%%writefile enable_tpu_mlengine.sh
SVC_ACCOUNT=$(curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://ml.googleapis.com/v1/projects/${PROJECT}:getConfig \
| grep tpuServiceAccount | tr '"' ' ' | awk '{print $3}' )
echo "Enabling TPU service account $SVC_ACCOUNT to act as Cloud ML Service Agent"
gcloud projects add-iam-policy-binding $PROJECT \
--member serviceAccount:$SVC_ACCOUNT --role roles/ml.serviceAgent
echo "Done"
!bash ./enable_tpu_mlengine.sh
###Output
_____no_output_____
###Markdown
Try preprocessing locally
###Code
%%bash
export PYTHONPATH=${PYTHONPATH}:${PWD}/mymodel
rm -rf /tmp/out
python -m trainer.preprocess \
--train_csv /tmp/input.csv \
--validation_csv /tmp/input.csv \
--labels_file /tmp/labels.txt \
--project_id $PROJECT \
--output_dir /tmp/out --runner=DirectRunner
!ls -l /tmp/out
###Output
_____no_output_____
###Markdown
Now run it over full training and evaluation datasets. This will happen in Cloud Dataflow.
###Code
%%bash
export PYTHONPATH=${PYTHONPATH}:${PWD}/mymodel
gsutil -m rm -rf gs://${BUCKET}/tpu/resnet/data
python -m trainer.preprocess \
--train_csv gs://cloud-ml-data/img/flower_photos/train_set.csv \
--validation_csv gs://cloud-ml-data/img/flower_photos/eval_set.csv \
--labels_file /tmp/labels.txt \
--project_id $PROJECT \
--output_dir gs://${BUCKET}/tpu/resnet/data
###Output
_____no_output_____
###Markdown
The above preprocessing step will take 15-20 minutes. Wait for the job to finish before you proceed. Navigate to [Cloud Dataflow section of GCP web console](https://console.cloud.google.com/dataflow) to monitor job progress. You will see something like this. Alternatively, you can simply copy my already preprocessed files and proceed to the next step: `gsutil -m cp gs://cloud-training-demos/tpu/resnet/data/* gs://${BUCKET}/tpu/resnet/copied_data`
###Code
%%bash
gsutil ls gs://${BUCKET}/tpu/resnet/data
###Output
_____no_output_____
###Markdown
Train on the Cloud
###Code
%%bash
echo -n "--num_train_images=$(gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | wc -l) "
echo -n "--num_eval_images=$(gsutil cat gs://cloud-ml-data/img/flower_photos/eval_set.csv | wc -l) "
echo "--num_label_classes=$(cat /tmp/labels.txt | wc -l)"
%%bash
TOPDIR=gs://${BUCKET}/tpu/resnet
OUTDIR=${TOPDIR}/trained
JOBNAME=imgclass_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR # Comment out this line to continue training from the last time
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.resnet_main \
--package-path=$(pwd)/mymodel/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_TPU \
--runtime-version=$TFVERSION --python-version=3.5 \
-- \
--data_dir=${TOPDIR}/data \
--model_dir=${OUTDIR} \
--resnet_depth=18 \
--train_batch_size=128 --eval_batch_size=32 --skip_host_call=True \
--steps_per_eval=250 --train_steps=1000 \
--num_train_images=3300 --num_eval_images=370 --num_label_classes=5 \
--export_dir=${OUTDIR}/export
###Output
_____no_output_____
###Markdown
The above training job will take 15-20 minutes. Wait for the job to finish before you proceed. Navigate to [Cloud ML Engine section of GCP web console](https://console.cloud.google.com/mlengine) to monitor job progress. The model should finish with an 80-83% accuracy (results will vary):```Eval results: {'global_step': 1000, 'loss': 0.7359053, 'top_1_accuracy': 0.82954544, 'top_5_accuracy': 1.0}```
###Code
%%bash
gsutil ls gs://${BUCKET}/tpu/resnet/trained/export/
###Output
_____no_output_____
###Markdown
You can look at the training charts with TensorBoard:
###Code
OUTDIR = 'gs://{}/tpu/resnet/trained/'.format(BUCKET)
from google.datalab.ml import TensorBoard
TensorBoard().start(OUTDIR)
TensorBoard().stop(11531)
print("Stopped Tensorboard")
###Output
_____no_output_____
###Markdown
These were the charts I got (I set smoothing to be zero):As you can see, the final blue dot (eval) is quite close to the lowest training loss, indicating that the model hasn't overfit. The top_1 accuracy on the evaluation dataset, however, is 80% which isn't that great. More data would help. Deploying and predicting with modelDeploy the model:
###Code
%%bash
MODEL_NAME="flowers"
MODEL_VERSION=resnet
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/tpu/resnet/trained/export/ | tail -1)
echo "Deleting/deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
# comment/uncomment the appropriate line to run. The first time around, you will need only the two create calls
# But during development, you might need to replace a version by deleting the version and creating it again
#gcloud ml-engine versions delete --quiet ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
###Output
_____no_output_____
###Markdown
We can use saved_model_cli to find out what inputs the model expects:
###Code
%%bash
saved_model_cli show --dir $(gsutil ls gs://${BUCKET}/tpu/resnet/trained/export/ | tail -1) --tag_set serve --signature_def serving_default
###Output
_____no_output_____
###Markdown
As you can see, the model expects image_bytes. This is typically base64 encoded. To predict with the model, let's take one of the example images that is available on Google Cloud Storage and convert it to a base64-encoded array.
###Code
import base64, sys, json
import tensorflow as tf
import io
with tf.gfile.GFile('gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg', 'rb') as ifp:
with io.open('test.json', 'w') as ofp:
image_data = ifp.read()
img = base64.b64encode(image_data).decode('utf-8')
json.dump({"image_bytes": {"b64": img}}, ofp)
!ls -l test.json
###Output
_____no_output_____
###Markdown
Send it to the prediction service
###Code
%%bash
gcloud ml-engine predict --model=flowers --version=resnet --json-instances=./test.json
###Output
_____no_output_____
###Markdown
What does CLASS no. 3 correspond to? (remember that class indices are 0-based)
###Code
%%bash
head -4 /tmp/labels.txt | tail -1
###Output
_____no_output_____
###Markdown
Here's how you would invoke those predictions without using gcloud
###Code
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import base64, sys, json
import tensorflow as tf
with tf.gfile.GFile('gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg', 'rb') as ifp:
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')
request_data = {'instances':
[
{"image_bytes": {"b64": base64.b64encode(ifp.read()).decode('utf-8')}}
]}
parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'flowers', 'resnet')
response = api.projects().predict(body=request_data, name=parent).execute()
print("response={0}".format(response))
###Output
_____no_output_____
###Markdown
Image Classification from scratch with TPUs on Cloud ML Engine using ResNetThis notebook demonstrates how to do image classification from scratch on a flowers dataset using TPUs and the resnet trainer.
###Code
import os
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.8'
%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
###Output
Updated property [core/project].
Updated property [compute/region].
###Markdown
Convert JPEG images to TensorFlow RecordsMy dataset consists of JPEG images in Google Cloud Storage. I have two CSV files that are formatted as follows: image-name, categoryInstead of reading the images from JPEG each time, we'll convert the JPEG data and store it as TF Records.
###Code
%bash
gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | head -5 > /tmp/input.csv
cat /tmp/input.csv
%bash
gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt
cat /tmp/labels.txt
###Output
daisy
dandelion
roses
sunflowers
tulips
###Markdown
Clone the TPU repo. Let's git clone the repo and get the preprocessing and model files. The model code has imports of the form `import resnet_model as model_lib`. We will need to change this to `from . import resnet_model as model_lib`.
###Code
%writefile copy_resnet_files.sh
#!/bin/bash
rm -rf tpu
git clone https://github.com/tensorflow/tpu
cd tpu
TFVERSION=$1
echo "Switching to version r$TFVERSION"
git checkout r$TFVERSION
cd ..
MODELCODE=tpu/models/official/resnet
OUTDIR=mymodel
rm -rf $OUTDIR
# preprocessing
cp -r imgclass $OUTDIR # brings in setup.py and __init__.py
cp tpu/tools/datasets/jpeg_to_tf_record.py $OUTDIR/trainer/preprocess.py
# model: fix imports
for FILE in $(ls -p $MODELCODE | grep -v /); do
CMD="cat $MODELCODE/$FILE "
for f2 in $(ls -p $MODELCODE | grep -v /); do
MODULE=`echo $f2 | sed 's/.py//g'`
CMD="$CMD | sed 's/^import ${MODULE}/from . import ${MODULE}/g' "
done
CMD="$CMD > $OUTDIR/trainer/$FILE"
eval $CMD
done
find $OUTDIR
echo "Finished copying files into $OUTDIR"
!bash ./copy_resnet_files.sh $TFVERSION
###Output
_____no_output_____
###Markdown
Enable TPU service accountAllow Cloud ML Engine to access the TPU and bill to your project
###Code
%writefile enable_tpu_mlengine.sh
SVC_ACCOUNT=$(curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://ml.googleapis.com/v1/projects/${PROJECT}:getConfig \
| grep tpuServiceAccount | tr '"' ' ' | awk '{print $3}' )
echo "Enabling TPU service account $SVC_ACCOUNT to act as Cloud ML Service Agent"
gcloud projects add-iam-policy-binding $PROJECT \
--member serviceAccount:$SVC_ACCOUNT --role roles/ml.serviceAgent
echo "Done"
!bash ./enable_tpu_mlengine.sh
###Output
_____no_output_____
###Markdown
Try preprocessing locally
###Code
%bash
export PYTHONPATH=${PYTHONPATH}:${PWD}/mymodel
rm -rf /tmp/out
python -m trainer.preprocess \
--train_csv /tmp/input.csv \
--validation_csv /tmp/input.csv \
--labels_file /tmp/labels.txt \
--project_id $PROJECT \
--output_dir /tmp/out --runner=DirectRunner
!ls -l /tmp/out
###Output
total 384
-rw-r--r-- 1 root root 195698 Jun 26 00:20 train-00000-of-00001
-rw-r--r-- 1 root root 195698 Jun 26 00:20 validation-00000-of-00001
###Markdown
Now run it over full training and evaluation datasets. This will happen in Cloud Dataflow.
###Code
%bash
export PYTHONPATH=${PYTHONPATH}:${PWD}/mymodel
gsutil -m rm -rf gs://${BUCKET}/tpu/resnet/data
python -m trainer.preprocess \
--train_csv gs://cloud-ml-data/img/flower_photos/train_set.csv \
--validation_csv gs://cloud-ml-data/img/flower_photos/eval_set.csv \
--labels_file /tmp/labels.txt \
--project_id $PROJECT \
--output_dir gs://${BUCKET}/tpu/resnet/data
###Output
_____no_output_____
###Markdown
The above preprocessing step will take 15-20 minutes. Wait for the job to finish before you proceed. Navigate to [Cloud Dataflow section of GCP web console](https://console.cloud.google.com/dataflow) to monitor job progress. You will see something like this. Alternatively, you can simply copy my already preprocessed files and proceed to the next step: `gsutil -m cp gs://cloud-training-demos/tpu/resnet/data/* gs://${BUCKET}/tpu/resnet/copied_data`
###Code
%bash
gsutil ls gs://${BUCKET}/tpu/resnet/data
###Output
gs://cloud-training-demos-ml/tpu/resnet/data/train-00000-of-00013
gs://cloud-training-demos-ml/tpu/resnet/data/train-00001-of-00013
gs://cloud-training-demos-ml/tpu/resnet/data/train-00002-of-00013
gs://cloud-training-demos-ml/tpu/resnet/data/train-00003-of-00013
gs://cloud-training-demos-ml/tpu/resnet/data/train-00004-of-00013
gs://cloud-training-demos-ml/tpu/resnet/data/train-00005-of-00013
gs://cloud-training-demos-ml/tpu/resnet/data/train-00006-of-00013
gs://cloud-training-demos-ml/tpu/resnet/data/train-00007-of-00013
gs://cloud-training-demos-ml/tpu/resnet/data/train-00008-of-00013
gs://cloud-training-demos-ml/tpu/resnet/data/train-00009-of-00013
gs://cloud-training-demos-ml/tpu/resnet/data/train-00010-of-00013
gs://cloud-training-demos-ml/tpu/resnet/data/train-00011-of-00013
gs://cloud-training-demos-ml/tpu/resnet/data/train-00012-of-00013
gs://cloud-training-demos-ml/tpu/resnet/data/validation-00000-of-00003
gs://cloud-training-demos-ml/tpu/resnet/data/validation-00001-of-00003
gs://cloud-training-demos-ml/tpu/resnet/data/validation-00002-of-00003
gs://cloud-training-demos-ml/tpu/resnet/data/tmp/
###Markdown
Train on the Cloud
###Code
%bash
echo -n "--num_train_images=$(gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | wc -l) "
echo -n "--num_eval_images=$(gsutil cat gs://cloud-ml-data/img/flower_photos/eval_set.csv | wc -l) "
echo "--num_label_classes=$(cat /tmp/labels.txt | wc -l)"
%bash
TOPDIR=gs://${BUCKET}/tpu/resnet
OUTDIR=${TOPDIR}/trained
JOBNAME=imgclass_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR # Comment out this line to continue training from the last time
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.resnet_main \
--package-path=$(pwd)/mymodel/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_TPU \
--runtime-version=$TFVERSION \
-- \
--data_dir=${TOPDIR}/data \
--model_dir=${OUTDIR} \
--resnet_depth=18 \
--train_batch_size=128 --eval_batch_size=32 --skip_host_call=True \
--steps_per_eval=250 --train_steps=1000 \
--num_train_images=3300 --num_eval_images=370 --num_label_classes=5 \
--export_dir=${OUTDIR}/export
###Output
_____no_output_____
###Markdown
The above training job will take 15-20 minutes. Wait for the job to finish before you proceed. Navigate to [Cloud ML Engine section of GCP web console](https://console.cloud.google.com/mlengine) to monitor job progress.
###Code
%bash
gsutil ls gs://${BUCKET}/tpu/resnet/trained/export/
###Output
gs://cloud-training-demos-ml/tpu/resnet/trained/export/
gs://cloud-training-demos-ml/tpu/resnet/trained/export/1529987998/
###Markdown
You can look at the training charts with TensorBoard:
###Code
OUTDIR = 'gs://{}/tpu/resnet/trained/'.format(BUCKET)
from google.datalab.ml import TensorBoard
TensorBoard().start(OUTDIR)
TensorBoard().stop(11531)
print("Stopped Tensorboard")
###Output
Stopped Tensorboard
###Markdown
These were the charts I got (I set smoothing to be zero):As you can see, the final blue dot (eval) is quite close to the lowest training loss, indicating that the model hasn't overfit. The top_1 accuracy on the evaluation dataset, however, is 80% which isn't that great. More data would help. Deploying and predicting with modelDeploy the model:
###Code
%bash
MODEL_NAME="flowers"
MODEL_VERSION=resnet
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/tpu/resnet/trained/export/ | tail -1)
echo "Deleting/deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
# comment/uncomment the appropriate line to run. The first time around, you will need only the two create calls
# But during development, you might need to replace a version by deleting the version and creating it again
#gcloud ml-engine versions delete --quiet ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
###Output
_____no_output_____
###Markdown
We can use saved_model_cli to find out what inputs the model expects:
###Code
%bash
saved_model_cli show --dir $(gsutil ls gs://${BUCKET}/tpu/resnet/trained/export/ | tail -1) --tag_set serve --signature_def serving_default
###Output
The given SavedModel SignatureDef contains the following input(s):
inputs['image_bytes'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: Placeholder:0
The given SavedModel SignatureDef contains the following output(s):
outputs['classes'] tensor_info:
dtype: DT_INT64
shape: (-1)
name: ArgMax:0
outputs['probabilities'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 5)
name: softmax_tensor:0
Method name is: tensorflow/serving/predict
###Markdown
As you can see, the model expects image_bytes, which is typically base64-encoded. To predict with the model, let's take one of the example images available on Google Cloud Storage and convert it to a base64-encoded string.
###Code
import base64, sys, json
import tensorflow as tf
with tf.gfile.FastGFile('gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg', 'r') as ifp:
with open('test.json', 'w') as ofp:
image_data = ifp.read()
img = base64.b64encode(image_data)
json.dump({"image_bytes": {"b64": img}}, ofp)
!ls -l test.json
###Output
-rw-r--r-- 1 root root 56992 Jun 26 05:33 test.json
###Markdown
Send it to the prediction service
###Code
%bash
gcloud ml-engine predict --model=flowers --version=resnet --json-instances=./test.json
###Output
CLASSES PROBABILITIES
3 [0.0012481402372941375, 0.0010495249880477786, 7.82029837864684e-06, 0.9976732134819031, 2.1333773474907503e-05]
###Markdown
What does CLASS no. 3 correspond to? (Remember that class numbering is 0-based.)
###Code
%bash
head -4 /tmp/labels.txt | tail -1
###Output
sunflowers
###Markdown
Here's how you would invoke those predictions without using gcloud
###Code
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import base64, sys, json
import tensorflow as tf
with tf.gfile.FastGFile('gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg', 'r') as ifp:
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')
request_data = {'instances':
[
{"image_bytes": {"b64": base64.b64encode(ifp.read())}}
]}
parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'flowers', 'resnet')
response = api.projects().predict(body=request_data, name=parent).execute()
print("response={0}".format(response))
###Output
response={u'predictions': [{u'probabilities': [0.0012481402372941375, 0.0010495249880477786, 7.82029837864684e-06, 0.9976732134819031, 2.1333773474907503e-05], u'classes': 3}]}
###Markdown
Flowers Image Classification with TPUs on Cloud ML Engine (ResNet). This notebook demonstrates how to do image classification from scratch on a flowers dataset using TPUs and the ResNet trainer.
###Code
import os
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.8'
%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
###Output
Updated property [core/project].
Updated property [compute/region].
###Markdown
Convert JPEG images to TensorFlow Records. My dataset consists of JPEG images in Google Cloud Storage. I have two CSV files that are formatted as follows: image-name, category. Instead of reading the images from JPEG each time, we'll convert the JPEG data and store it as TF Records.
###Code
%bash
gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | head -5 > /tmp/input.csv
cat /tmp/input.csv
%bash
gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt
cat /tmp/labels.txt
###Output
daisy
dandelion
roses
sunflowers
tulips
###Markdown
Enable TPU service account. Allow Cloud ML Engine to access the TPU and bill to your project.
###Code
%bash
SVC_ACCOUNT=$(curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://ml.googleapis.com/v1/projects/${PROJECT}:getConfig \
| grep tpuServiceAccount | tr '"' ' ' | awk '{print $3}' )
echo "Enabling TPU service account $SVC_ACCOUNT to act as Cloud ML Service Agent"
gcloud projects add-iam-policy-binding $PROJECT \
--member serviceAccount:$SVC_ACCOUNT --role roles/ml.serviceAgent
echo "Done"
###Output
_____no_output_____
###Markdown
Run preprocessing. First, try it out locally -- note that the inputs are all local paths.
###Code
%bash
export PYTHONPATH=${PYTHONPATH}:${PWD}/imgclass
rm -rf /tmp/out
python -m trainer.preprocess \
--trainCsv /tmp/input.csv \
--validationCsv /tmp/input.csv \
--labelsFile /tmp/labels.txt \
--projectId $PROJECT \
--outputDir /tmp/out
!ls -l /tmp/out
!zcat /tmp/out/train-00000* | head
###Output
�l �+��
��
0
image/filename
754296579_30a9ae018c_n.jpg
image/format
JPEG
gzip: stdout: Broken pipe
###Markdown
Now run it over full training and evaluation datasets. This will happen in Cloud Dataflow.
###Code
%bash
export PYTHONPATH=${PYTHONPATH}:${PWD}/imgclass
gsutil -m rm -rf gs://${BUCKET}/tpu/resnet/data
python -m trainer.preprocess \
--trainCsv gs://cloud-ml-data/img/flower_photos/train_set.csv \
--validationCsv gs://cloud-ml-data/img/flower_photos/eval_set.csv \
--labelsFile /tmp/labels.txt \
--projectId $PROJECT \
--outputDir gs://${BUCKET}/tpu/resnet/data
###Output
_____no_output_____
###Markdown
The above preprocessing step will take 15-20 minutes. Wait for the job to finish before you proceed. Navigate to the [Cloud Dataflow section of GCP web console](https://console.cloud.google.com/dataflow) to monitor job progress. Alternatively, you can simply copy my already preprocessed files and proceed to the next step: `gsutil -m cp gs://cloud-training-demos/tpu/resnet/data/* gs://${BUCKET}/tpu/resnet/copied_data`
###Code
%bash
gsutil ls gs://${BUCKET}/tpu/resnet/data
###Output
gs://cloud-training-demos-ml/tpu/resnet/data/train-00000-of-00010
gs://cloud-training-demos-ml/tpu/resnet/data/train-00001-of-00010
gs://cloud-training-demos-ml/tpu/resnet/data/train-00002-of-00010
gs://cloud-training-demos-ml/tpu/resnet/data/train-00003-of-00010
gs://cloud-training-demos-ml/tpu/resnet/data/train-00004-of-00010
gs://cloud-training-demos-ml/tpu/resnet/data/train-00005-of-00010
gs://cloud-training-demos-ml/tpu/resnet/data/train-00006-of-00010
gs://cloud-training-demos-ml/tpu/resnet/data/train-00007-of-00010
gs://cloud-training-demos-ml/tpu/resnet/data/train-00008-of-00010
gs://cloud-training-demos-ml/tpu/resnet/data/train-00009-of-00010
gs://cloud-training-demos-ml/tpu/resnet/data/validation-00000-of-00004
gs://cloud-training-demos-ml/tpu/resnet/data/validation-00001-of-00004
gs://cloud-training-demos-ml/tpu/resnet/data/validation-00002-of-00004
gs://cloud-training-demos-ml/tpu/resnet/data/validation-00003-of-00004
gs://cloud-training-demos-ml/tpu/resnet/data/tmp/
###Markdown
Train on the Cloud. Get the ResNet model code from the TPU repository and package it up. This involves changing imports of the form `import resnet_model as model_lib` to `from . import resnet_model as model_lib`. There are also three hardcoded constants in the model code: `NUM_TRAIN_IMAGES = 1281167`, `NUM_EVAL_IMAGES = 50000`, and `LABEL_CLASSES = 1000`. We'll change them to match our dataset. Then, submit the training job to Cloud ML Engine.
###Code
%bash
echo "NUM_TRAIN_IMAGES = $(gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | wc -l)"
echo "NUM_EVAL_IMAGES = $(gsutil cat gs://cloud-ml-data/img/flower_photos/eval_set.csv | wc -l)"
echo "LABEL_CLASSES = $(cat /tmp/labels.txt | wc -l)"
%bash
rm -rf tpu
git clone https://github.com/tensorflow/tpu
#cd tpu
#git checkout r${TFVERSION} # correct version
#cd ..
# copy over
MODELCODE=tpu/models/official/resnet
rm -rf tmp
mkdir -p tmp/trainer
touch tmp/trainer/__init__.py
for FILE in $(ls $MODELCODE); do
CMD="cat $MODELCODE/$FILE "
for f2 in $(ls $MODELCODE); do
MODULE=`echo $f2 | sed 's/.py//g'`
CMD="$CMD | sed 's/^import ${MODULE}/from . import ${MODULE}/g' "
done
echo "WARNING! Hardcoding #train=3300 #eval=370 #labels=5 -- Change as needed"
CMD="$CMD | sed 's/^NUM_TRAIN_IMAGES = 1281167/NUM_TRAIN_IMAGES = 3300/g' "
CMD="$CMD | sed 's/^NUM_EVAL_IMAGES = 50000/NUM_EVAL_IMAGES = 370/g' "
CMD="$CMD | sed 's/^LABEL_CLASSES = 1000/LABEL_CLASSES = 5/g' "
CMD="$CMD > tmp/trainer/$FILE"
eval $CMD
done
cp imgclass/setup.py tmp
find tmp
%bash
TOPDIR=gs://${BUCKET}/tpu/resnet
OUTDIR=${TOPDIR}/trained
JOBNAME=imgclass_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR # Comment out this line to continue training from the last time
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.resnet_main \
--package-path=$(pwd)/tmp/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_TPU \
--runtime-version=$TFVERSION \
-- \
--data_dir=${TOPDIR}/data \
--model_dir=${OUTDIR} \
--resnet_depth=18 \
--train_batch_size=128 --eval_batch_size=32 --skip_host_call=True \
--train_steps=1000 \
--export_dir=${OUTDIR}/export
###Output
_____no_output_____
###Markdown
The above training job will take about 12 minutes. Wait for the job to finish before you proceed. Navigate to the [Cloud ML Engine section of GCP web console](https://console.cloud.google.com/mlengine) to monitor job progress. Note: this version FAILS when exporting the model.
###Code
%bash
gsutil ls -l gs://${BUCKET}/tpu/resnet/trained/
###Output
_____no_output_____
###Markdown
Deploying and predicting with model [doesn't work yet]. Deploy the model:
###Code
%bash
MODEL_NAME="flowers"
MODEL_VERSION=amoeba
MODEL_LOCATION=gs://${BUCKET}/tpu/amoeba/trained/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ml-engine versions delete --quiet ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
#gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION}
###Output
Deleting and deploying flowers amoeba from gs://cloud-training-demos-ml/tpu/amoeba/trained/ ... this will take a few minutes
###Markdown
To predict with the model, let's take one of the example images that is available on Google Cloud Storage
###Code
%writefile test.json
{"imageurl": "gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg"}
###Output
_____no_output_____
###Markdown
Send it to the prediction service
###Code
%bash
gcloud ml-engine predict --model=flowers --version=amoeba --json-instances=./test.json
###Output
_____no_output_____
###Markdown
Image Classification from scratch with TPUs on Cloud ML Engine using ResNet. This notebook demonstrates how to do image classification from scratch on a flowers dataset using TPUs and the ResNet trainer.
###Code
import os
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.9'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
###Output
_____no_output_____
###Markdown
Convert JPEG images to TensorFlow Records. My dataset consists of JPEG images in Google Cloud Storage. I have two CSV files that are formatted as follows: image-name, category. Instead of reading the images from JPEG each time, we'll convert the JPEG data and store it as TF Records.
###Code
%%bash
gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | head -5 > /tmp/input.csv
cat /tmp/input.csv
%%bash
gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt
cat /tmp/labels.txt
###Output
_____no_output_____
###Markdown
Clone the TPU repo. Let's git clone the repo and get the preprocessing and model files. The model code has imports of the form `import resnet_model as model_lib`; we will need to change this to `from . import resnet_model as model_lib`.
###Code
%%writefile copy_resnet_files.sh
#!/bin/bash
rm -rf tpu
git clone https://github.com/tensorflow/tpu
cd tpu
TFVERSION=$1
echo "Switching to version r$TFVERSION"
git checkout r$TFVERSION
cd ..
MODELCODE=tpu/models/official/resnet
OUTDIR=mymodel
rm -rf $OUTDIR
# preprocessing
cp -r imgclass $OUTDIR # brings in setup.py and __init__.py
cp tpu/tools/datasets/jpeg_to_tf_record.py $OUTDIR/trainer/preprocess.py
# model: fix imports
for FILE in $(ls -p $MODELCODE | grep -v /); do
CMD="cat $MODELCODE/$FILE "
for f2 in $(ls -p $MODELCODE | grep -v /); do
MODULE=`echo $f2 | sed 's/.py//g'`
CMD="$CMD | sed 's/^import ${MODULE}/from . import ${MODULE}/g' "
done
CMD="$CMD > $OUTDIR/trainer/$FILE"
eval $CMD
done
find $OUTDIR
echo "Finished copying files into $OUTDIR"
!bash ./copy_resnet_files.sh $TFVERSION
###Output
_____no_output_____
###Markdown
Enable TPU service account. Allow Cloud ML Engine to access the TPU and bill to your project.
###Code
%%writefile enable_tpu_mlengine.sh
SVC_ACCOUNT=$(curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://ml.googleapis.com/v1/projects/${PROJECT}:getConfig \
| grep tpuServiceAccount | tr '"' ' ' | awk '{print $3}' )
echo "Enabling TPU service account $SVC_ACCOUNT to act as Cloud ML Service Agent"
gcloud projects add-iam-policy-binding $PROJECT \
--member serviceAccount:$SVC_ACCOUNT --role roles/ml.serviceAgent
echo "Done"
!bash ./enable_tpu_mlengine.sh
###Output
_____no_output_____
###Markdown
Try preprocessing locally
###Code
%%bash
export PYTHONPATH=${PYTHONPATH}:${PWD}/mymodel
rm -rf /tmp/out
python -m trainer.preprocess \
--train_csv /tmp/input.csv \
--validation_csv /tmp/input.csv \
--labels_file /tmp/labels.txt \
--project_id $PROJECT \
--output_dir /tmp/out --runner=DirectRunner
!ls -l /tmp/out
###Output
_____no_output_____
###Markdown
Now run it over full training and evaluation datasets. This will happen in Cloud Dataflow.
###Code
%%bash
export PYTHONPATH=${PYTHONPATH}:${PWD}/mymodel
gsutil -m rm -rf gs://${BUCKET}/tpu/resnet/data
python -m trainer.preprocess \
--train_csv gs://cloud-ml-data/img/flower_photos/train_set.csv \
--validation_csv gs://cloud-ml-data/img/flower_photos/eval_set.csv \
--labels_file /tmp/labels.txt \
--project_id $PROJECT \
--output_dir gs://${BUCKET}/tpu/resnet/data
###Output
_____no_output_____
###Markdown
The above preprocessing step will take 15-20 minutes. Wait for the job to finish before you proceed. Navigate to the [Cloud Dataflow section of GCP web console](https://console.cloud.google.com/dataflow) to monitor job progress. Alternatively, you can simply copy my already preprocessed files and proceed to the next step: `gsutil -m cp gs://cloud-training-demos/tpu/resnet/data/* gs://${BUCKET}/tpu/resnet/copied_data`
###Code
%%bash
gsutil ls gs://${BUCKET}/tpu/resnet/data
###Output
_____no_output_____
###Markdown
Train on the Cloud
###Code
%%bash
echo -n "--num_train_images=$(gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | wc -l) "
echo -n "--num_eval_images=$(gsutil cat gs://cloud-ml-data/img/flower_photos/eval_set.csv | wc -l) "
echo "--num_label_classes=$(cat /tmp/labels.txt | wc -l)"
%%bash
TOPDIR=gs://${BUCKET}/tpu/resnet
OUTDIR=${TOPDIR}/trained
JOBNAME=imgclass_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR # Comment out this line to continue training from the last time
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.resnet_main \
--package-path=$(pwd)/mymodel/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_TPU \
--runtime-version=$TFVERSION --python-version=3.5 \
-- \
--data_dir=${TOPDIR}/data \
--model_dir=${OUTDIR} \
--resnet_depth=18 \
--train_batch_size=128 --eval_batch_size=32 --skip_host_call=True \
--steps_per_eval=250 --train_steps=1000 \
--num_train_images=3300 --num_eval_images=370 --num_label_classes=5 \
--export_dir=${OUTDIR}/export
###Output
_____no_output_____
###Markdown
The above training job will take 15-20 minutes. Wait for the job to finish before you proceed. Navigate to the [Cloud ML Engine section of GCP web console](https://console.cloud.google.com/mlengine) to monitor job progress. The model should finish with an 80-83% accuracy (results will vary): ```Eval results: {'global_step': 1000, 'loss': 0.7359053, 'top_1_accuracy': 0.82954544, 'top_5_accuracy': 1.0}```
###Code
%%bash
gsutil ls gs://${BUCKET}/tpu/resnet/trained/export/
###Output
_____no_output_____
###Markdown
You can look at the training charts with TensorBoard:
###Code
OUTDIR = 'gs://{}/tpu/resnet/trained/'.format(BUCKET)
from google.datalab.ml import TensorBoard
TensorBoard().start(OUTDIR)
TensorBoard().stop(11531)
print("Stopped Tensorboard")
###Output
_____no_output_____
###Markdown
These were the charts I got (I set smoothing to zero). As you can see, the final blue dot (eval) is quite close to the lowest training loss, indicating that the model hasn't overfit. The top_1 accuracy on the evaluation dataset, however, is 80%, which isn't that great; more data would help. Deploying and predicting with the model. Deploy the model:
###Code
%%bash
MODEL_NAME="flowers"
MODEL_VERSION=resnet
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/tpu/resnet/trained/export/ | tail -1)
echo "Deleting/deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
# comment/uncomment the appropriate line to run. The first time around, you will need only the two create calls
# But during development, you might need to replace a version by deleting the version and creating it again
#gcloud ml-engine versions delete --quiet ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
###Output
_____no_output_____
###Markdown
We can use saved_model_cli to find out what inputs the model expects:
###Code
%%bash
saved_model_cli show --dir $(gsutil ls gs://${BUCKET}/tpu/resnet/trained/export/ | tail -1) --tag_set serve --signature_def serving_default
###Output
_____no_output_____
###Markdown
As you can see, the model expects image_bytes, which is typically base64-encoded. To predict with the model, let's take one of the example images available on Google Cloud Storage and convert it to a base64-encoded string.
###Code
import base64, sys, json
import tensorflow as tf
import io
with tf.gfile.GFile('gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg', 'rb') as ifp:
with io.open('test.json', 'w') as ofp:
image_data = ifp.read()
img = base64.b64encode(image_data).decode('utf-8')
json.dump({"image_bytes": {"b64": img}}, ofp)
!ls -l test.json
###Output
_____no_output_____
###Markdown
Send it to the prediction service
###Code
%%bash
gcloud ml-engine predict --model=flowers --version=resnet --json-instances=./test.json
###Output
_____no_output_____
###Markdown
What does CLASS no. 3 correspond to? (Remember that class numbering is 0-based.)
###Code
%%bash
head -4 /tmp/labels.txt | tail -1
###Output
_____no_output_____
###Markdown
Here's how you would invoke those predictions without using gcloud
###Code
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import base64, sys, json
import tensorflow as tf
with tf.gfile.GFile('gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg', 'rb') as ifp:
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')
request_data = {'instances':
[
{"image_bytes": {"b64": base64.b64encode(ifp.read()).decode('utf-8')}}
]}
parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'flowers', 'resnet')
response = api.projects().predict(body=request_data, name=parent).execute()
print("response={0}".format(response))
###Output
_____no_output_____ |
ml4trading-2ed/18_convolutional_neural_nets/06_cnn_for_trading_features_to_clustered_image_format.ipynb | ###Markdown
CNN for Trading - Part 2: From Time-Series Features to Clustered Images To exploit the grid-like structure of time-series data, we can use CNN architectures for univariate and multivariate time series. In the latter case, we consider different time series as channels, similar to the different color signals. An alternative approach converts a time series of alpha factors into a two-dimensional format to leverage the ability of CNNs to detect local patterns. [Sezer and Ozbayoglu (2018)](https://www.researchgate.net/publication/324802031_Algorithmic_Financial_Trading_with_Deep_Convolutional_Neural_Networks_Time_Series_to_Image_Conversion_Approach) propose CNN-TA, which computes 15 technical indicators for different intervals and uses hierarchical clustering (see Chapter 13, Data-Driven Risk Factors and Asset Allocation with Unsupervised Learning) to locate indicators that behave similarly close to each other in a two-dimensional grid. The authors train a CNN similar to the CIFAR-10 example we used earlier to predict whether to buy, hold, or sell an asset on a given day. They compare the CNN performance to "buy-and-hold" and other models and find that it outperforms all alternatives using daily price series for Dow 30 stocks and the nine most-traded ETFs over the 2007-2017 time period. The section on *CNN for Trading* consists of three notebooks that experiment with this approach using daily US equity price data. They demonstrate: 1. how to compute relevant financial features, 2. how to convert a similar set of indicators into image format and cluster them by similarity, and 3. how to train a CNN to predict daily returns and evaluate a simple long-short strategy based on the resulting signals. Selecting and Clustering Features The next steps that we will tackle in this notebook are: 1. select the 15 most relevant features from the 20 candidates to fill the 15×15 input grid, and 2. apply hierarchical clustering to identify features that behave similarly and order the columns and the rows of the grid accordingly. Imports & Settings
###Code
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
from pathlib import Path
import pandas as pd
from tqdm import tqdm
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import dendrogram, linkage, cophenet
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import mutual_info_regression
import matplotlib.pyplot as plt
import seaborn as sns
MONTH = 21
YEAR = 12 * MONTH
START = '2001-01-01'
END = '2017-12-31'
sns.set_style('white')
idx = pd.IndexSlice
results_path = Path('results', 'cnn_for_trading')
if not results_path.exists():
results_path.mkdir(parents=True)
###Output
_____no_output_____
###Markdown
Load Model Data
###Code
with pd.HDFStore('data.h5') as store:
features = store.get('features')
targets = store.get('targets')
features.info()
targets.info()
###Output
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 2378728 entries, ('A', Timestamp('2001-01-02 00:00:00')) to ('ZTS', Timestamp('2017-12-29 00:00:00'))
Data columns (total 4 columns):
# Column Dtype
--- ------ -----
0 r01_fwd float64
1 r01dec_fwd float64
2 r05_fwd float64
3 r05dec_fwd float64
dtypes: float64(4)
memory usage: 81.8+ MB
###Markdown
Select Features using Mutual Information To this end, we estimate the mutual information for each indicator and the 15 intervals with respect to our target, the one-day forward returns. As discussed in Chapter 4, Financial Feature Engineering – How to Research Alpha Factors, scikit-learn provides the `mutual_info_regression()` function that makes this straightforward, albeit time-consuming and memory-intensive. To accelerate the process, we randomly sample 100,000 observations:
###Code
mi = {}
for t in tqdm([1, 5]):
target = f'r{t:02}_fwd'
# sample a smaller number to speed up the computation
df = features.join(targets[target]).dropna().sample(n=100000)
X = df.drop(target, axis=1)
y = df[target]
mi[t] = pd.Series(mutual_info_regression(X=X, y=y),
index=X.columns).sort_values(ascending=False)
mutual_info = pd.DataFrame(mi)
mutual_info.to_hdf('data.h5', 'mutual_info')
mutual_info = pd.read_hdf('data.h5', 'mutual_info')
mi_by_indicator = (mutual_info.groupby(mutual_info.
index.to_series()
.str.split('_').str[-1])
.mean()
.rank(ascending=False)
.sort_values(by=1))
mutual_info.boxplot()
sns.despine();
###Output
_____no_output_____
###Markdown
The below figure shows the mutual information, averaged across the 15 intervals for each indicator. NATR, PPO, and Bollinger Bands are most important from this metric's perspective:
###Code
(mutual_info.groupby(mutual_info.index.to_series().str.split('_').str[-1])[1]
.mean()
.sort_values().plot.barh(title='Mutual Information with 1-Day Forward Returns'))
sns.despine()
plt.tight_layout()
plt.savefig(results_path / 'mutual_info_cnn_features', dpi=300)
best_features = mi_by_indicator.head(15).index
size = len(best_features)
###Output
_____no_output_____
###Markdown
Hierarchical Feature Clustering
###Code
features = pd.concat([features.filter(like=f'_{f}') for f in best_features], axis=1)
new_cols = {}
for feature in best_features:
fnames = sorted(features.filter(like=f'_{feature}').columns.tolist())
renamed = [f'{i:02}_{feature}' for i in range(1, len(fnames)+ 1)]
new_cols.update(dict(zip(fnames, renamed)))
features = features.rename(columns=new_cols).sort_index(axis=1)
features.info()
###Output
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 2378728 entries, ('A', Timestamp('2001-01-02 00:00:00')) to ('ZTS', Timestamp('2017-12-29 00:00:00'))
Columns: 225 entries, 01_BBH to 15_WMA
dtypes: float64(225)
memory usage: 4.1+ GB
###Markdown
Hierarchical Clustering As discussed in the first section of this chapter, CNNs rely on the locality of relevant patterns that is typically found in images, where nearby pixels are closely related and changes from one pixel to the next are often gradual. To organize our indicators in a similar fashion, we will follow Sezer and Ozbayoglu's approach of applying hierarchical clustering. The goal is to identify features that behave similarly and order the columns and the rows of the grid accordingly. We can build on SciPy's `pdist()`, `linkage()`, and `dendrogram()` functions that we introduced in *Chapter 13, Data-Driven Risk Factors and Asset Allocation with Unsupervised Learning* alongside other forms of clustering. We create a helper function that standardizes the input column-wise to avoid distorting distances among features due to differences in scale, and use the Ward criterion, which merges clusters to minimize variance. The function returns the order of the leaf nodes in the dendrogram, which in turn displays the successive formation of larger clusters:
###Code
def cluster_features(data, labels, ax, title):
data = StandardScaler().fit_transform(data)
pairwise_distance = pdist(data)
Z = linkage(data, 'ward')
c, coph_dists = cophenet(Z, pairwise_distance)
dend = dendrogram(Z,
labels=labels,
orientation='top',
leaf_rotation=0.,
leaf_font_size=8.,
ax=ax)
ax.set_title(title)
return dend['ivl']
###Output
_____no_output_____
###Markdown
To obtain the optimized order of technical indicators in the columns and the different intervals in the rows, we use NumPy's `.reshape()` method to ensure that the items we would like to cluster appear in the rows of the two-dimensional array we pass to `cluster_features()` (SciPy's `linkage()` clusters the rows of its input).
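###Markdown
To make the reshaping concrete, here is a small toy sketch (added for illustration, using 3 intervals × 3 indicators instead of 15 × 15): the rows of each transposed array are the items that `cluster_features()` will cluster, before we apply the same pattern to the full feature set below.
###Code
import numpy as np
# Toy stand-in: 2 observations x (3 intervals x 3 indicators), columns ordered interval-first
toy = np.arange(2 * 9).reshape(2, 9)
# Rows become indicators (analogous to the col_order computation below)
by_indicator = toy.reshape(-1, 3).T
# Rows become intervals (analogous to the row_order computation below)
by_interval = toy.reshape(-1, 3, 3).transpose((0, 2, 1)).reshape(-1, 3).T
print(by_indicator.shape, by_interval.shape)  # (3, 6) (3, 6)
###Output
_____no_output_____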
###Code
fig, axes = plt.subplots(figsize=(15, 4), ncols=2)
labels = sorted(best_features)
title = 'Column Features: Indicators'
col_order = cluster_features(features.dropna().values.reshape(-1, 15).T,
labels,
axes[0],
title)
labels = list(range(1, 16))
title = 'Row Features: Indicator Parameters'
row_order = cluster_features(
features.dropna().values.reshape(-1, 15, 15).transpose((0, 2, 1)).reshape(-1, 15).T,
labels, axes[1], title)
axes[0].set_xlabel('Indicators')
axes[1].set_xlabel('Parameters')
sns.despine()
fig.tight_layout()
fig.savefig(results_path / 'cnn_clustering', dpi=300)
###Output
_____no_output_____
###Markdown
We reorder the features accordingly and store the result as inputs for the CNN that we will create in the next step.
###Code
feature_order = [f'{i:02}_{j}' for i in row_order for j in col_order]
features = features.loc[:, feature_order]
features = features.apply(pd.to_numeric, downcast='float')
features.info()
features.to_hdf('data.h5', 'img_data')
###Output
_____no_output_____ |
Machine Learning/Course files/mean_median_mode/MeanMedianMode.ipynb | ###Markdown
Mean, Median, Mode, and introducing NumPy Mean vs. Median Let's create some fake income data, centered around 27,000 with a normal distribution and standard deviation of 15,000, with 10,000 data points. (We'll discuss those terms more later, if you're not familiar with them.) Then, compute the mean (average) - it should be close to 27,000:
###Code
import numpy as np
incomes = np.random.normal(27000, 15000, 10000)
print(incomes)
np.mean(incomes)
###Output
[ 8619.81612548 5920.07345543 26983.92373813 ... 8106.56482905
49970.70844095 31989.46931024]
###Markdown
We can segment the income data into 50 buckets, and plot it as a histogram:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(incomes, 50)
plt.show()
###Output
_____no_output_____
###Markdown
Now compute the median - since we have a nice, even distribution it too should be close to 27,000:
###Code
print(np.median(incomes))
print(np.mean(incomes))
###Output
26989.72733416497
26903.231823171747
###Markdown
Now we'll add Donald Trump into the mix. Darn income inequality!
###Code
incomes = np.append(incomes, [1000000000])  # append one extreme high income so the outlier's effect on the mean is visible
len(incomes)
###Output
_____no_output_____
###Markdown
The median won't change much, but the mean does:
###Code
np.median(incomes)
np.mean(incomes)
###Output
_____no_output_____
###Markdown
Mode Next, let's generate some fake age data for 500 people:
###Code
ages = np.random.randint(18, high=90, size=500)
ages
from scipy import stats
stats.mode(ages)
###Output
_____no_output_____ |
docs/examples/eda_peaks/eda_peaks.ipynb | ###Markdown
Analyze Electrodermal Activity (EDA) This example can be referenced by [citing the package](https://neuropsychology.github.io/NeuroKit/cite_us.html). This example shows how to use NeuroKit2 to extract the features from **Electrodermal Activity (EDA)**.
###Code
# Load the NeuroKit package and other useful packages
import neurokit2 as nk
import matplotlib.pyplot as plt
# This "decorative" cell should be hidden from the docs once this is implemented:
# https://github.com/microsoft/vscode-jupyter/issues/1182
plt.rcParams['figure.figsize'] = [15, 5] # Bigger images
plt.rcParams['font.size']= 14
###Output
_____no_output_____
###Markdown
Extract the cleaned EDA signal In this example, we will use a simulated EDA signal. However, you can use any signal you have generated (for instance, extracted from the dataframe using [read_acqknowledge()](https://neuropsychology.github.io/NeuroKit/functions/data.html#read-acqknowledge)).
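###Markdown
For instance, a minimal sketch of starting from your own recording could look like the cell below; the file name `my_recording.acq` and the `'EDA'` channel label are assumptions about your setup rather than part of this example.
###Code
import os
# Sketch: load a BIOPAC file instead of simulating, if such a file is available
if os.path.exists("my_recording.acq"):  # hypothetical file name
    data, sampling_rate = nk.read_acqknowledge("my_recording.acq")
    eda_signal = data["EDA"]  # the channel name depends on how your recording is labelled
###Output
_____no_output_____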
###Code
# Simulate 10 seconds of EDA Signal (recorded at 250 samples / second)
eda_signal = nk.eda_simulate(duration=10, sampling_rate=250, scr_number=3, drift=0.01)
###Output
_____no_output_____
###Markdown
Once you have a raw EDA signal in the shape of a vector (i.e., a one-dimensional array) or a list, you can use [eda_process()](https://neuropsychology.github.io/NeuroKit/functions/eda.html#eda-process) to process it.
###Code
# Process the raw EDA signal
signals, info = nk.eda_process(eda_signal, sampling_rate=250)
###Output
_____no_output_____
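###Markdown
As a quick optional check, you can list what `eda_process()` just returned (the exact column and key names may vary slightly across NeuroKit2 versions):
###Code
# Peek at the two outputs of eda_process()
print(signals.columns.tolist())  # DataFrame: raw/cleaned signal, tonic/phasic components, SCR markers
print(list(info.keys()))         # dict: sample locations and values of the detected SCR features
###Output
_____no_output_____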
###Markdown
*Note: It is critical that you specify the correct sampling rate of your signal throughout many processing functions, as this allows NeuroKit to have a time reference.* This function outputs two elements, a *dataframe* containing the different signals (e.g., the raw signal, clean signal, SCR samples marking the different features etc.), and a *dictionary* containing information about the Skin Conductance Response (SCR) peaks (e.g., onsets, peak amplitude etc.). Locate Skin Conductance Response (SCR) features The processing function does two important things for our purpose: Firstly, it cleans the signal. Secondly, it detects the location of 1) peak onsets, 2) peak amplitude, and 3) half-recovery time. Let's extract these from the output.
###Code
# Extract clean EDA and SCR features
cleaned = signals["EDA_Clean"]
features = [info["SCR_Onsets"], info["SCR_Peaks"], info["SCR_Recovery"]]
###Output
_____no_output_____
###Markdown
We can now visualize the location of the peak onsets, the peak amplitude, as well as the half-recovery time points in the cleaned EDA signal, respectively marked by the red dashed line, blue dashed line, and orange dashed line.
###Code
# Visualize SCR features in cleaned EDA signal
plot = nk.events_plot(features, cleaned, color=['red', 'blue', 'orange'])
###Output
_____no_output_____
###Markdown
Decompose EDA into Phasic and Tonic components We can also decompose the EDA signal into its phasic and tonic components, or more specifically, the ***Phasic Skin Conductance Response (SCR)*** and the ***Tonic Skin Conductance Level (SCL)*** respectively. The SCR represents the stimulus-dependent, fast-changing signal whereas the SCL is slow-changing and continuous. Separating these two signals helps to provide a more accurate estimation of the true SCR amplitude.
###Code
# Filter phasic and tonic components
data = nk.eda_phasic(nk.standardize(eda_signal), sampling_rate=250)
###Output
_____no_output_____
###Markdown
*Note: here we **standardized** the raw EDA signal before the decomposition, which can be useful in the presence of high inter-individual variations.* We can now add the raw signal to the dataframe containing the two signals, and plot them!
###Code
data["EDA_Raw"] = eda_signal # Add raw signal
data.plot()
###Output
_____no_output_____
###Markdown
Quick Plot You can obtain all of these features by using the [eda_plot()](https://neuropsychology.github.io/NeuroKit/functions/eda.html#eda-plot) function on the dataframe of processed EDA.
###Code
# Plot EDA signal
nk.eda_plot(signals)
###Output
_____no_output_____ |
ConvMixer_public.ipynb | ###Markdown
Image classification with the ConvMixer model **Author:** [LUU THIEN XUAN](https://www.linkedin.com/in/thienxuanluu/) **Date created:** 2021/10/13 **Last modified:** 2021/10/13 **Description:** Implementing the ConvMixer model for CIFAR-100 image classification. https://openreview.net/forum?id=TVHS5Y4dNvM Introduction: This example implements the recent paper **Patches Are All You Need**, demonstrated on the CIFAR-100 dataset. This example requires TensorFlow 2.4 or higher, as well as [TensorFlow Addons](https://www.tensorflow.org/addons/overview), which can be installed using the following command: `pip install -U tensorflow-addons` Setup
###Code
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
!pip install -U tensorflow-addons
import tensorflow_addons as tfa
print('tf:',tf.__version__)
###Output
Requirement already satisfied: tensorflow-addons in /usr/local/lib/python3.7/dist-packages (0.14.0)
Requirement already satisfied: typeguard>=2.7 in /usr/local/lib/python3.7/dist-packages (from tensorflow-addons) (2.7.1)
tf: 2.6.0
###Markdown
Prepare the data
###Code
num_classes = 100
input_shape = (32, 32, 3)
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data()
print(f"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}")
print(f"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}")
###Output
x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 1)
x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 1)
###Markdown
Configure the hyperparameters
###Code
weight_decay = 0.0001
batch_size = 64
num_epochs = 150
dropout_rate = 0.2
image_size = 64 # We'll resize input images to this size.
cmlp_dim = 1024
cmlp_depth = 20
cmlp_kernel = 9
cmlp_patch = 14
###Output
_____no_output_____
###Markdown
Build a classification model. We implement a method that builds a classifier given the processing blocks.
###Code
def build_classifier(blocks):
inputs = layers.Input(shape=input_shape)
# Augment data.
augmented = data_augmentation(inputs)
# Process x using the module blocks.
x = blocks(augmented)
# Apply global average pooling
representation = layers.GlobalAveragePooling2D()(x)
# Apply dropout.
representation = layers.Dropout(rate=dropout_rate)(representation)
# Compute logits outputs.
logits = layers.Dense(num_classes)(representation)
# Create the Keras model.
return keras.Model(inputs=inputs, outputs=logits)
###Output
_____no_output_____
###Markdown
Define an experiment. We implement a utility function to compile, train, and evaluate a given model.
###Code
def run_experiment(model):
# Create Adam optimizer with weight decay.
optimizer = tfa.optimizers.AdamW(
learning_rate=learning_rate, weight_decay=weight_decay,)
# Compile the model.
model.compile(
optimizer=optimizer,
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[
keras.metrics.SparseCategoricalAccuracy(name="acc"),
keras.metrics.SparseTopKCategoricalAccuracy(5, name="top5-acc"),
],)
# Create a learning rate scheduler callback.
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor="val_loss",factor=0.5,patience=5)
# Create an early stopping callback.
early_stopping = tf.keras.callbacks.EarlyStopping(monitor="val_loss",patience=10,restore_best_weights=True)
# Fit the model.
history = model.fit(
x=x_train,
y=y_train,
batch_size=batch_size,
epochs=num_epochs,
validation_split=0.1,
callbacks=[early_stopping, reduce_lr],
)
_, accuracy, top_5_accuracy = model.evaluate(x_test, y_test)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%")
# Return history to plot learning curves.
return history
###Output
_____no_output_____
###Markdown
Use data augmentation
###Code
data_augmentation = keras.Sequential(
[
layers.experimental.preprocessing.Normalization(),
layers.experimental.preprocessing.Resizing(image_size, image_size),
layers.experimental.preprocessing.RandomFlip("horizontal"),
layers.experimental.preprocessing.RandomZoom(
height_factor=0.2, width_factor=0.2
),
],
name="data_augmentation",
)
# Compute the mean and the variance of the training data for normalization.
data_augmentation.layers[0].adapt(x_train)
###Output
_____no_output_____
###Markdown
ConvMixer model. Implement the ConvMixer module
###Code
class Residual(layers.Layer):
    """Wraps a block with a skip connection: output = fn(x) + x."""
    def __init__(self, fn, *args, **kwargs):
        super(Residual, self).__init__(*args, **kwargs)
        self.fn = fn
    @tf.function(jit_compile=True)
    def call(self, x):
        return self.fn(x) + x
class ConvMixer(layers.Layer):
    def __init__(self, dim=None, kernel_size=9, *args, **kwargs):
        super(ConvMixer, self).__init__(*args, **kwargs)
        self.dim = dim
        self.kernel_size = kernel_size
        # Depthwise convolution (groups=dim): mixes spatial locations within each channel
        self.cmixer = keras.Sequential([
            layers.Conv2D(self.dim, self.kernel_size, groups=self.dim, padding="same"),
            tfa.layers.GELU(),
            layers.BatchNormalization()])
        # Create the residual wrapper once here rather than on every forward pass
        self.residual = Residual(self.cmixer)
        # Pointwise (1x1) convolution: mixes information across channels
        self.Conv2d = layers.Conv2D(dim, kernel_size=1)
        self.GELU = tfa.layers.GELU()
        self.BatchNorm2d = layers.BatchNormalization()
    @tf.function(jit_compile=True)
    def call(self, inputs):
        x = self.residual(inputs)
        x = self.Conv2d(x)
        x = self.GELU(x)
        x = self.BatchNorm2d(x)
        return x
###Output
_____no_output_____
###Markdown
Build, train, and evaluate the ConvMixer model
###Code
cmlp_blocks = keras.Sequential(
[
layers.Conv2D(cmlp_dim, kernel_size=cmlp_patch, strides=cmlp_patch),
tfa.layers.GELU(),
layers.BatchNormalization(),
keras.Sequential(
[
ConvMixer(dim=cmlp_dim, kernel_size=cmlp_kernel) for _ in range(cmlp_depth)
])
])
learning_rate = 0.003
cmlp_classifier = build_classifier(cmlp_blocks)
history = run_experiment(cmlp_classifier)
###Output
Epoch 1/150
704/704 [==============================] - 136s 166ms/step - loss: 5.4079 - acc: 0.0225 - top5-acc: 0.1031 - val_loss: 4.3419 - val_acc: 0.0282 - val_top5-acc: 0.1264
Epoch 2/150
704/704 [==============================] - 91s 129ms/step - loss: 4.1739 - acc: 0.0480 - top5-acc: 0.1892 - val_loss: 4.0420 - val_acc: 0.0740 - val_top5-acc: 0.2348
Epoch 3/150
704/704 [==============================] - 91s 129ms/step - loss: 3.9851 - acc: 0.0762 - top5-acc: 0.2573 - val_loss: 3.8839 - val_acc: 0.0872 - val_top5-acc: 0.2904
Epoch 4/150
704/704 [==============================] - 91s 129ms/step - loss: 3.8493 - acc: 0.0977 - top5-acc: 0.3049 - val_loss: 3.8368 - val_acc: 0.1074 - val_top5-acc: 0.3094
Epoch 5/150
704/704 [==============================] - 91s 129ms/step - loss: 3.7250 - acc: 0.1211 - top5-acc: 0.3476 - val_loss: 3.6203 - val_acc: 0.1378 - val_top5-acc: 0.3770
Epoch 6/150
704/704 [==============================] - 91s 130ms/step - loss: 3.6124 - acc: 0.1389 - top5-acc: 0.3847 - val_loss: 3.5359 - val_acc: 0.1544 - val_top5-acc: 0.4082
Epoch 7/150
704/704 [==============================] - 91s 129ms/step - loss: 3.5475 - acc: 0.1530 - top5-acc: 0.4035 - val_loss: 3.4940 - val_acc: 0.1568 - val_top5-acc: 0.4222
Epoch 8/150
704/704 [==============================] - 91s 129ms/step - loss: 3.4727 - acc: 0.1632 - top5-acc: 0.4234 - val_loss: 3.4598 - val_acc: 0.1716 - val_top5-acc: 0.4352
Epoch 9/150
704/704 [==============================] - 91s 129ms/step - loss: 3.4051 - acc: 0.1807 - top5-acc: 0.4448 - val_loss: 3.4256 - val_acc: 0.1768 - val_top5-acc: 0.4402
Epoch 10/150
704/704 [==============================] - 91s 129ms/step - loss: 3.3537 - acc: 0.1892 - top5-acc: 0.4575 - val_loss: 3.3578 - val_acc: 0.1900 - val_top5-acc: 0.4544
Epoch 11/150
704/704 [==============================] - 91s 129ms/step - loss: 3.2990 - acc: 0.1984 - top5-acc: 0.4740 - val_loss: 3.2938 - val_acc: 0.2012 - val_top5-acc: 0.4714
Epoch 12/150
704/704 [==============================] - 91s 129ms/step - loss: 3.2575 - acc: 0.2069 - top5-acc: 0.4855 - val_loss: 3.2897 - val_acc: 0.1980 - val_top5-acc: 0.4812
Epoch 13/150
704/704 [==============================] - 91s 129ms/step - loss: 3.2052 - acc: 0.2186 - top5-acc: 0.4980 - val_loss: 3.2288 - val_acc: 0.2098 - val_top5-acc: 0.4950
Epoch 14/150
704/704 [==============================] - 91s 129ms/step - loss: 3.1628 - acc: 0.2226 - top5-acc: 0.5075 - val_loss: 3.2416 - val_acc: 0.2128 - val_top5-acc: 0.4902
Epoch 15/150
704/704 [==============================] - 91s 129ms/step - loss: 3.1373 - acc: 0.2294 - top5-acc: 0.5151 - val_loss: 3.2586 - val_acc: 0.2136 - val_top5-acc: 0.4948
Epoch 16/150
704/704 [==============================] - 91s 129ms/step - loss: 3.0751 - acc: 0.2433 - top5-acc: 0.5305 - val_loss: 3.1311 - val_acc: 0.2306 - val_top5-acc: 0.5226
Epoch 17/150
704/704 [==============================] - 91s 129ms/step - loss: 3.0474 - acc: 0.2466 - top5-acc: 0.5408 - val_loss: 3.1868 - val_acc: 0.2212 - val_top5-acc: 0.5154
Epoch 18/150
704/704 [==============================] - 91s 129ms/step - loss: 3.0124 - acc: 0.2542 - top5-acc: 0.5466 - val_loss: 3.1315 - val_acc: 0.2352 - val_top5-acc: 0.5256
Epoch 19/150
704/704 [==============================] - 91s 130ms/step - loss: 2.9837 - acc: 0.2581 - top5-acc: 0.5566 - val_loss: 3.2168 - val_acc: 0.2196 - val_top5-acc: 0.5052
Epoch 20/150
704/704 [==============================] - 91s 130ms/step - loss: 2.9608 - acc: 0.2623 - top5-acc: 0.5603 - val_loss: 3.0401 - val_acc: 0.2500 - val_top5-acc: 0.5444
Epoch 21/150
704/704 [==============================] - 91s 130ms/step - loss: 2.9187 - acc: 0.2723 - top5-acc: 0.5680 - val_loss: 3.0767 - val_acc: 0.2498 - val_top5-acc: 0.5384
Epoch 22/150
704/704 [==============================] - 91s 129ms/step - loss: 2.8830 - acc: 0.2756 - top5-acc: 0.5779 - val_loss: 3.0085 - val_acc: 0.2610 - val_top5-acc: 0.5500
Epoch 23/150
704/704 [==============================] - 91s 129ms/step - loss: 2.8586 - acc: 0.2832 - top5-acc: 0.5860 - val_loss: 3.0303 - val_acc: 0.2616 - val_top5-acc: 0.5416
Epoch 24/150
704/704 [==============================] - 91s 130ms/step - loss: 2.8317 - acc: 0.2881 - top5-acc: 0.5908 - val_loss: 3.0468 - val_acc: 0.2506 - val_top5-acc: 0.5464
Epoch 25/150
704/704 [==============================] - 91s 129ms/step - loss: 2.7993 - acc: 0.2943 - top5-acc: 0.5976 - val_loss: 2.9959 - val_acc: 0.2642 - val_top5-acc: 0.5542
Epoch 26/150
704/704 [==============================] - 91s 129ms/step - loss: 2.7791 - acc: 0.2993 - top5-acc: 0.6075 - val_loss: 3.0182 - val_acc: 0.2570 - val_top5-acc: 0.5452
Epoch 27/150
704/704 [==============================] - 91s 130ms/step - loss: 2.7617 - acc: 0.3024 - top5-acc: 0.6072 - val_loss: 2.9638 - val_acc: 0.2778 - val_top5-acc: 0.5562
Epoch 28/150
704/704 [==============================] - 91s 130ms/step - loss: 2.7244 - acc: 0.3068 - top5-acc: 0.6174 - val_loss: 2.9329 - val_acc: 0.2760 - val_top5-acc: 0.5774
Epoch 29/150
704/704 [==============================] - 91s 130ms/step - loss: 2.7006 - acc: 0.3122 - top5-acc: 0.6208 - val_loss: 2.9474 - val_acc: 0.2762 - val_top5-acc: 0.5664
Epoch 30/150
704/704 [==============================] - 91s 130ms/step - loss: 2.6844 - acc: 0.3135 - top5-acc: 0.6260 - val_loss: 3.0214 - val_acc: 0.2680 - val_top5-acc: 0.5512
Epoch 31/150
704/704 [==============================] - 91s 130ms/step - loss: 2.6609 - acc: 0.3192 - top5-acc: 0.6293 - val_loss: 2.9802 - val_acc: 0.2742 - val_top5-acc: 0.5630
Epoch 32/150
704/704 [==============================] - 91s 129ms/step - loss: 2.6595 - acc: 0.3201 - top5-acc: 0.6338 - val_loss: 3.0148 - val_acc: 0.2718 - val_top5-acc: 0.5492
Epoch 33/150
704/704 [==============================] - 91s 129ms/step - loss: 2.6213 - acc: 0.3294 - top5-acc: 0.6376 - val_loss: 2.9578 - val_acc: 0.2642 - val_top5-acc: 0.5752
Epoch 34/150
704/704 [==============================] - 91s 129ms/step - loss: 2.4008 - acc: 0.3746 - top5-acc: 0.6893 - val_loss: 2.7840 - val_acc: 0.3078 - val_top5-acc: 0.6012
Epoch 35/150
704/704 [==============================] - 91s 129ms/step - loss: 2.3501 - acc: 0.3859 - top5-acc: 0.6966 - val_loss: 2.8085 - val_acc: 0.3088 - val_top5-acc: 0.6066
Epoch 36/150
704/704 [==============================] - 91s 129ms/step - loss: 2.3255 - acc: 0.3903 - top5-acc: 0.7030 - val_loss: 2.8034 - val_acc: 0.3062 - val_top5-acc: 0.6088
Epoch 37/150
704/704 [==============================] - 91s 129ms/step - loss: 2.3029 - acc: 0.3972 - top5-acc: 0.7087 - val_loss: 2.8071 - val_acc: 0.3054 - val_top5-acc: 0.6006
Epoch 38/150
704/704 [==============================] - 91s 129ms/step - loss: 2.2895 - acc: 0.3961 - top5-acc: 0.7130 - val_loss: 2.8415 - val_acc: 0.3044 - val_top5-acc: 0.6006
Epoch 39/150
704/704 [==============================] - 91s 129ms/step - loss: 2.2774 - acc: 0.3991 - top5-acc: 0.7157 - val_loss: 2.7933 - val_acc: 0.3102 - val_top5-acc: 0.6062
Epoch 40/150
704/704 [==============================] - 91s 129ms/step - loss: 2.1166 - acc: 0.4405 - top5-acc: 0.7471 - val_loss: 2.7587 - val_acc: 0.3212 - val_top5-acc: 0.6200
Epoch 41/150
704/704 [==============================] - 91s 129ms/step - loss: 2.1013 - acc: 0.4441 - top5-acc: 0.7492 - val_loss: 2.7656 - val_acc: 0.3214 - val_top5-acc: 0.6184
Epoch 42/150
704/704 [==============================] - 91s 129ms/step - loss: 2.1073 - acc: 0.4423 - top5-acc: 0.7488 - val_loss: 2.7465 - val_acc: 0.3202 - val_top5-acc: 0.6200
Epoch 43/150
704/704 [==============================] - 91s 129ms/step - loss: 2.1149 - acc: 0.4409 - top5-acc: 0.7463 - val_loss: 2.7660 - val_acc: 0.3170 - val_top5-acc: 0.6136
Epoch 44/150
704/704 [==============================] - 91s 129ms/step - loss: 2.1216 - acc: 0.4349 - top5-acc: 0.7501 - val_loss: 2.7410 - val_acc: 0.3192 - val_top5-acc: 0.6214
Epoch 45/150
704/704 [==============================] - 91s 129ms/step - loss: 2.1302 - acc: 0.4352 - top5-acc: 0.7472 - val_loss: 2.7357 - val_acc: 0.3244 - val_top5-acc: 0.6212
Epoch 46/150
704/704 [==============================] - 91s 129ms/step - loss: 2.1406 - acc: 0.4339 - top5-acc: 0.7430 - val_loss: 2.7283 - val_acc: 0.3216 - val_top5-acc: 0.6214
Epoch 47/150
704/704 [==============================] - 91s 129ms/step - loss: 2.1443 - acc: 0.4326 - top5-acc: 0.7439 - val_loss: 2.7311 - val_acc: 0.3244 - val_top5-acc: 0.6220
Epoch 48/150
704/704 [==============================] - 91s 129ms/step - loss: 2.1566 - acc: 0.4285 - top5-acc: 0.7416 - val_loss: 2.7659 - val_acc: 0.3204 - val_top5-acc: 0.6190
Epoch 49/150
704/704 [==============================] - 91s 130ms/step - loss: 2.1643 - acc: 0.4299 - top5-acc: 0.7383 - val_loss: 2.7580 - val_acc: 0.3154 - val_top5-acc: 0.6192
Epoch 50/150
704/704 [==============================] - 91s 130ms/step - loss: 2.1765 - acc: 0.4252 - top5-acc: 0.7383 - val_loss: 2.7798 - val_acc: 0.3132 - val_top5-acc: 0.6126
Epoch 51/150
704/704 [==============================] - 91s 129ms/step - loss: 2.1880 - acc: 0.4220 - top5-acc: 0.7368 - val_loss: 2.7309 - val_acc: 0.3226 - val_top5-acc: 0.6168
Epoch 52/150
704/704 [==============================] - 91s 130ms/step - loss: 2.0753 - acc: 0.4498 - top5-acc: 0.7592 - val_loss: 2.7238 - val_acc: 0.3254 - val_top5-acc: 0.6250
Epoch 53/150
704/704 [==============================] - 91s 130ms/step - loss: 2.1114 - acc: 0.4432 - top5-acc: 0.7508 - val_loss: 2.6953 - val_acc: 0.3308 - val_top5-acc: 0.6294
Epoch 54/150
704/704 [==============================] - 91s 130ms/step - loss: 2.1705 - acc: 0.4300 - top5-acc: 0.7404 - val_loss: 2.8034 - val_acc: 0.3082 - val_top5-acc: 0.6034
Epoch 55/150
704/704 [==============================] - 91s 130ms/step - loss: 2.2115 - acc: 0.4185 - top5-acc: 0.7317 - val_loss: 2.7543 - val_acc: 0.3112 - val_top5-acc: 0.6210
Epoch 56/150
704/704 [==============================] - 91s 130ms/step - loss: 2.2556 - acc: 0.4079 - top5-acc: 0.7227 - val_loss: 2.7410 - val_acc: 0.3180 - val_top5-acc: 0.6176
Epoch 57/150
704/704 [==============================] - 91s 130ms/step - loss: 2.2971 - acc: 0.3980 - top5-acc: 0.7128 - val_loss: 2.8079 - val_acc: 0.3084 - val_top5-acc: 0.6072
Epoch 58/150
704/704 [==============================] - 91s 130ms/step - loss: 2.3305 - acc: 0.3919 - top5-acc: 0.7063 - val_loss: 2.7500 - val_acc: 0.3198 - val_top5-acc: 0.6176
Epoch 59/150
704/704 [==============================] - 91s 130ms/step - loss: 2.2869 - acc: 0.4039 - top5-acc: 0.7153 - val_loss: 2.7592 - val_acc: 0.3118 - val_top5-acc: 0.6122
Epoch 60/150
704/704 [==============================] - 91s 129ms/step - loss: 2.3656 - acc: 0.3840 - top5-acc: 0.6996 - val_loss: 2.8113 - val_acc: 0.2960 - val_top5-acc: 0.6040
Epoch 61/150
704/704 [==============================] - 91s 129ms/step - loss: 2.4386 - acc: 0.3705 - top5-acc: 0.6814 - val_loss: 2.7994 - val_acc: 0.3010 - val_top5-acc: 0.6052
Epoch 62/150
704/704 [==============================] - 91s 129ms/step - loss: 2.5062 - acc: 0.3521 - top5-acc: 0.6667 - val_loss: 2.8871 - val_acc: 0.2996 - val_top5-acc: 0.5852
Epoch 63/150
704/704 [==============================] - 91s 129ms/step - loss: 2.5682 - acc: 0.3387 - top5-acc: 0.6513 - val_loss: 2.9012 - val_acc: 0.2876 - val_top5-acc: 0.5860
313/313 [==============================] - 10s 23ms/step - loss: 2.6583 - acc: 0.3348 - top5-acc: 0.6294
Test accuracy: 33.48%
Test top 5 accuracy: 62.94%
|
module3/3.Assignment/3.Assignment_Solution_RegressionClassification_Module3.ipynb | ###Markdown
Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 3 AssignmentWe're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices.But not just for condos in Tribeca...Instead, predict property sales prices for **One Family Dwellings** (`BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'`) using a subset of the data where the **sale price was more than \\$100 thousand and less than $2 million.** The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal.- [X] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.- [X] Do exploratory visualizations with Seaborn.- [X] Do one-hot encoding of categorical features.- [X] Do feature selection with `SelectKBest`.- [X] Fit a linear regression model with multiple features.- [X] Get mean absolute error for the test set.- [ ] As always, commit your notebook to your fork of the GitHub repo. Stretch Goals- [ ] Add your own stretch goal(s) !- [X] Try [`RidgeCV`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html) instead of Linear Regression, especially if your errors blow up! Watch [Aaron Gallant's 9 minute video on Ridge Regression](https://www.youtube.com/watch?v=XK5jkedy17w) to learn more.- [X] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Learn more about feature selection: - ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance) - [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html) - [mlxtend](http://rasbt.github.io/mlxtend/) library - scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection) - [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson.- [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you’re interested in more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients.- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites).(That book is good regardless of whether your cultural worldview is inferential statistics or predictive machine learning)- [ ] Read Leo Breiman's paper, ["Statistical Modeling: The Two Cultures"](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html):> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. 
Pipeline serves multiple purposes here:> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors.
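For a rough idea of what that could look like in this assignment, here is a minimal sketch (the choice of steps, `k=15`, and `alpha=1.0` are illustrative assumptions, not a required solution; it reuses the `X_train_encoded`/`y_train` variables built later in this notebook):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge

# Chain scaling, feature selection, and the estimator so a single fit/predict
# runs the whole sequence and the transformers are fit only on training data.
pipeline = make_pipeline(
    StandardScaler(),
    SelectKBest(score_func=f_regression, k=15),
    Ridge(alpha=1.0),
)
# pipeline.fit(X_train_encoded, y_train)
# y_pred = pipeline.predict(X_test_encoded)
```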
###Code
import os, sys
in_colab = 'google.colab' in sys.modules
# If you're in Colab...
if in_colab:
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git
!git pull origin master
# Install required python packages
!pip install -r requirements.txt
# Change into directory for module
os.chdir('module3')
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
import pandas as pd
import pandas_profiling
# Read New York City property sales data
df = pd.read_csv('../data/NYC_Citywide_Rolling_Calendar_Sales.csv')
# Change column names: replace spaces with underscores
df.columns = [col.replace(' ', '_') for col in df]
# SALE_PRICE was read as strings.
# Remove symbols, convert to integer
df['SALE_PRICE'] = (
df['SALE_PRICE']
.str.replace('$', '', regex=False)  # regex=False so the literal '$' is removed rather than treated as a regex anchor
.str.replace('-', '', regex=False)
.str.replace(',', '', regex=False)
.astype(int)
)
###Output
_____no_output_____
###Markdown
Use a subset of the dataPredict **One Family Dwellings** (`BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'`) using a subset of the data where the **sale price was more than \\$100 thousand and less than $2 million.**
###Code
mask = ((df['BUILDING_CLASS_CATEGORY'] == '01 ONE FAMILY DWELLINGS') &
(df['SALE_PRICE'] > 100000) &
(df['SALE_PRICE'] < 2000000))
df = df[mask]
###Output
_____no_output_____
###Markdown
Do train/test splitUse data from January — March 2019 to train. Use data from April 2019 to test.
###Code
df['SALE_DATE'] = pd.to_datetime(df['SALE_DATE'], infer_datetime_format=True)
df['SALE_DATE'].describe()
cutoff = pd.to_datetime('2019-04-01')
train = df[df.SALE_DATE < cutoff]
test = df[df.SALE_DATE >= cutoff]
train.shape, test.shape
import pandas_profiling
train.profile_report()
###Output
_____no_output_____
###Markdown
Do exploratory visualizations with Seaborn
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
for col in sorted(train.columns):
if train[col].nunique() < 10:
try:
sns.catplot(x=col, y='SALE_PRICE', data=train, kind='bar', color='grey')
plt.show()
except:
pass
numeric = train.select_dtypes('number')
for col in sorted(numeric.columns):
sns.lmplot(x=col, y='SALE_PRICE', data=train, scatter_kws=dict(alpha=0.05))
plt.show()
train.BOROUGH.info()
###Output
_____no_output_____
###Markdown
Do one-hot encoding of categorical features
###Code
# BOROUGH is a numeric column, but arguably should be a categorical feature,
# so convert it from a number to a string
train['BOROUGH'] = train['BOROUGH'].astype(str)
test['BOROUGH'] = test['BOROUGH'].astype(str)
# Check cardinality of non-numeric features
train.describe(exclude='number').T.sort_values(by='unique')
# Reduce cardinality for NEIGHBORHOOD feature
# Get a list of the top 10 neighborhoods
top10 = train['NEIGHBORHOOD'].value_counts()[:10].index
# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
train['NEIGHBORHOOD'].value_counts()
target = 'SALE_PRICE'
numerics = train.select_dtypes(include='number').columns.drop(target).tolist()
categoricals = train.select_dtypes(exclude='number').columns.tolist()
low_cardinality_categoricals = [col for col in categoricals
if train[col].nunique() <= 50]
features = numerics + low_cardinality_categoricals
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
import category_encoders as ce
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train)
X_test_encoded = encoder.transform(X_test)
X_train_encoded.head()
###Output
_____no_output_____
###Markdown
Fit a linear regression model with multiple features. Get mean absolute error for the test set.
###Code
# Drop EASE-MENT, it's null 100% of the time
X_train_encoded = X_train_encoded.drop(columns='EASE-MENT')
X_test_encoded = X_test_encoded.drop(columns='EASE-MENT')
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_encoded)
X_test_scaled = scaler.transform(X_test_encoded)
for k in range(1, len(X_train_encoded.columns)+1):
print(f'{k} features')
selector = SelectKBest(score_func=f_regression, k=k)
X_train_selected = selector.fit_transform(X_train_scaled, y_train)
X_test_selected = selector.transform(X_test_scaled)
model = LinearRegression()
model.fit(X_train_selected, y_train)
y_pred = model.predict(X_test_selected)
mae = mean_absolute_error(y_test, y_pred)
print(f'Test MAE: ${mae:,.0f} \n')
###Output
1 features
Test MAE: $183,641
2 features
Test MAE: $182,569
3 features
Test MAE: $182,569
4 features
Test MAE: $183,441
5 features
Test MAE: $186,532
6 features
Test MAE: $182,366
7 features
Test MAE: $194,204
8 features
Test MAE: $172,203
9 features
Test MAE: $171,721
10 features
Test MAE: $162,840
11 features
Test MAE: $163,984
12 features
Test MAE: $162,140
13 features
Test MAE: $161,428
14 features
Test MAE: $161,430
15 features
Test MAE: $161,301
16 features
Test MAE: $163,095
17 features
Test MAE: $162,964
18 features
Test MAE: $162,964
19 features
Test MAE: $162,752
20 features
Test MAE: $162,752
21 features
Test MAE: $162,560
22 features
Test MAE: $163,008
23 features
Test MAE: $163,057
24 features
Test MAE: $163,057
25 features
Test MAE: $163,057
26 features
Test MAE: $162,779
27 features
Test MAE: $20,955,941,571,174,472
28 features
Test MAE: $162,722
29 features
Test MAE: $27,788,224,381,348
30 features
Test MAE: $6,730,344,069,832,371
31 features
Test MAE: $9,791,729,318,812,754
32 features
Test MAE: $17,988,915,186,908,660
33 features
Test MAE: $23,777,119,250,355,996
34 features
Test MAE: $383,411,860,984,814
35 features
Test MAE: $162,480
36 features
Test MAE: $162,288
37 features
Test MAE: $67,860,659,198,659,568
38 features
Test MAE: $10,800,362,380,187,296
39 features
Test MAE: $355,375,768,552,818
40 features
Test MAE: $102,137,346,124,192
41 features
Test MAE: $8,715,239,514,480,743
42 features
Test MAE: $12,459,933,556,297,932
43 features
Test MAE: $806,930,518,644,283
44 features
Test MAE: $1,921,978,148,133,319
45 features
Test MAE: $712,528,228,621,818
46 features
Test MAE: $161,167
47 features
Test MAE: $22,116,745,373,162
48 features
Test MAE: $1,130,206,224,564,043
49 features
Test MAE: $6,857,552,206,587,154
50 features
Test MAE: $161,358
51 features
Test MAE: $109,580,766,038,480
52 features
Test MAE: $313,103,093,058,772
53 features
Test MAE: $5,089,114,232,864,148
54 features
Test MAE: $636,750,177,700,398
###Markdown
Try [`RidgeCV`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html) instead of Linear Regression, especially if your errors blow up
###Code
from sklearn.linear_model import RidgeCV
for k in range(1, len(X_train_encoded.columns)+1):
print(f'{k} features')
selector = SelectKBest(score_func=f_regression, k=k)
X_train_selected = selector.fit_transform(X_train_scaled, y_train)
X_test_selected = selector.transform(X_test_scaled)
model = RidgeCV()
model.fit(X_train_selected, y_train)
y_pred = model.predict(X_test_selected)
mae = mean_absolute_error(y_test, y_pred)
print(f'Test MAE: ${mae:,.0f} \n')
# Which features were used?
k = 15
selector = SelectKBest(score_func=f_regression, k=k)
X_train_selected = selector.fit_transform(X_train_scaled, y_train)
all_names = X_train_encoded.columns
selected_mask = selector.get_support()
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]
print('Features selected:')
for name in selected_names:
print(name)
print('\nFeatures not selected:')
for name in unselected_names:
print(name)
###Output
Features selected:
BLOCK
ZIP_CODE
COMMERCIAL_UNITS
TOTAL_UNITS
GROSS_SQUARE_FEET
BOROUGH_3
BOROUGH_2
BOROUGH_5
NEIGHBORHOOD_OTHER
NEIGHBORHOOD_BAYSIDE
NEIGHBORHOOD_FLUSHING-NORTH
BUILDING_CLASS_AT_PRESENT_A5
BUILDING_CLASS_AT_PRESENT_A3
BUILDING_CLASS_AT_TIME_OF_SALE_A5
BUILDING_CLASS_AT_TIME_OF_SALE_A3
Features not selected:
LOT
RESIDENTIAL_UNITS
YEAR_BUILT
TAX_CLASS_AT_TIME_OF_SALE
BOROUGH_4
BOROUGH_1
NEIGHBORHOOD_QUEENS VILLAGE
NEIGHBORHOOD_LAURELTON
NEIGHBORHOOD_SO. JAMAICA-BAISLEY PARK
NEIGHBORHOOD_SPRINGFIELD GARDENS
NEIGHBORHOOD_GREAT KILLS
NEIGHBORHOOD_SOUTH OZONE PARK
NEIGHBORHOOD_MIDLAND BEACH
NEIGHBORHOOD_ST. ALBANS
BUILDING_CLASS_CATEGORY_01 ONE FAMILY DWELLINGS
TAX_CLASS_AT_PRESENT_1
TAX_CLASS_AT_PRESENT_1D
BUILDING_CLASS_AT_PRESENT_A9
BUILDING_CLASS_AT_PRESENT_A1
BUILDING_CLASS_AT_PRESENT_A0
BUILDING_CLASS_AT_PRESENT_A2
BUILDING_CLASS_AT_PRESENT_S1
BUILDING_CLASS_AT_PRESENT_A4
BUILDING_CLASS_AT_PRESENT_A6
BUILDING_CLASS_AT_PRESENT_A8
BUILDING_CLASS_AT_PRESENT_B2
BUILDING_CLASS_AT_PRESENT_S0
BUILDING_CLASS_AT_PRESENT_B3
APARTMENT_NUMBER_nan
APARTMENT_NUMBER_RP.
BUILDING_CLASS_AT_TIME_OF_SALE_A9
BUILDING_CLASS_AT_TIME_OF_SALE_A1
BUILDING_CLASS_AT_TIME_OF_SALE_A0
BUILDING_CLASS_AT_TIME_OF_SALE_A2
BUILDING_CLASS_AT_TIME_OF_SALE_S1
BUILDING_CLASS_AT_TIME_OF_SALE_A4
BUILDING_CLASS_AT_TIME_OF_SALE_A6
BUILDING_CLASS_AT_TIME_OF_SALE_A8
BUILDING_CLASS_AT_TIME_OF_SALE_S0
|
week 8/Week 8 - Numerical Python (NumPy) Practice.ipynb | ###Markdown
What is NumPy?NumPy is a Python library used for working with arrays.It has functions for working in domain of linear algebra, fourier transform, and matrices.NumPy was created in 2005 by Travis Oliphant. It is an open source project and you can use it freely. Why Use NumPy?In Python we have lists that serve the purpose of arrays, but they are slow to process.NumPy aims to provide an array object that is up to 50x faster than traditional Python lists.The array object in NumPy is called ndarray, it provides a lot of supporting functions that make working with ndarray very easy. Import NumPyOnce NumPy is installed, import it in your applications by adding the import keyword:
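The exact speed-up depends on the operation and the machine; a quick way to see the difference for yourself is to time the same element-wise operation on a plain list and on an ndarray (a small sketch, not part of the original exercises; the array size and `number` are arbitrary choices):

```python
import timeit

setup = "import numpy as np; lst = list(range(1_000_000)); arr = np.arange(1_000_000)"
list_time = timeit.timeit("[x * 2 for x in lst]", setup=setup, number=10)
array_time = timeit.timeit("arr * 2", setup=setup, number=10)
print(f"list: {list_time:.3f}s  ndarray: {array_time:.3f}s")
```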
###Code
import numpy
###Output
_____no_output_____
###Markdown
NumPy as npNumPy is usually imported under the np alias.Create an alias with the as keyword while importing:
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Checking NumPy VersionThe version string is stored under the `__version__` attribute.
###Code
import numpy as nk
print(nk.__version__)
###Output
1.18.5
###Markdown
Create a NumPy ndarray ObjectNumPy is used to work with arrays. The array object in NumPy is called ndarray. We can create a NumPy ndarray object by using the array() function.
###Code
import numpy as np
arr = np.array([101, 201, 301, 401, 501])
print(arr)
print(type(arr))
###Output
[101 201 301 401 501]
<class 'numpy.ndarray'>
###Markdown
To create an ndarray, we can pass a list, tuple or any array-like object into the array() method, and it will be converted into an ndarray:
###Code
import numpy as np
nameList = ['Angel', "Shemi", "Marvel", "Linda"]
ageTuple = (41, 32, 21, 19)
gradeDict = {"CSC102": 89, "MTH 102": 77, "CHM 102": 69, "GST 102": 99}
arr_nameList = np.array(nameList)
arr_ageTuple = np.array(ageTuple)
arr_gradeDict = np.array(gradeDict)
print(arr_nameList)
print(arr_ageTuple)
print(arr_gradeDict)
###Output
['Angel' 'Shemi' 'Marvel' 'Linda']
[41 32 21 19]
{'CSC102': 89, 'MTH 102': 77, 'CHM 102': 69, 'GST 102': 99}
###Markdown
Dimensions in ArrayA dimension in arrays is one level of array depth (nested arrays). 0-Dimension0-D arrays, or Scalars, are the elements in an array. Each value in an array is a 0-D array.
###Code
import numpy as np
classNum = int(input("How many students are in the CSC 102 class?"))
class_arr = np.array(classNum)
if (class_arr == 1):
print("There is only ", class_arr, "student in CSC 102 class" )
else:
print("There are", class_arr, "students in CSC 102 class" )
###Output
How many students are in the CSC 102 class?123
There are 123 students in CSC 102 class
###Markdown
1-D ArraysAn array that has 0-D arrays as its elements is called uni-dimensional or 1-D array. These are the most common and basic arrays.
###Code
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
print(arr)
###Output
[1 2 3 4 5]
###Markdown
2-D ArraysAn array that has 1-D arrays as its elements is called a 2-D array. These are often used to represent matrix or 2nd order tensors.
###Code
import numpy as np
arr = np.array([[1, 2, 3], [4, 5, 6]])
print(arr)
###Output
[[1 2 3]
[4 5 6]]
###Markdown
3-D arraysAn array that has 2-D arrays (matrices) as its elements is called 3-D array. These are often used to represent a 3rd order tensor.
###Code
import numpy as np
arr = np.array([[[1, 2, 3], [4, 5, 6]], [[1, 2, 3], [4, 5, 6]], [[1, 2, 3], [4, 5, 6]]])
print(arr)
###Output
[[[1 2 3]
[4 5 6]]
[[1 2 3]
[4 5 6]]
[[1 2 3]
[4 5 6]]]
###Markdown
Check Number of Dimensions?NumPy arrays provide the ndim attribute, which returns an integer that tells us how many dimensions the array has.
###Code
import numpy as np
a = np.array(42)
b = np.array([[[1, 2, 3], [4, 5, 6]], [[1, 2, 3], [4, 5, 6]], [[1, 2, 3], [4, 5, 6]]])
c = np.array([[1, 2, 3], [4, 5, 6]])
d = np.array([1, 2, 3, 4, 5])
print(a.ndim)
print(b.ndim)
print(c.ndim)
print(d.ndim)
###Output
0
3
2
1
###Markdown
Higher Dimensional ArraysAn array can have any number of dimensions. When the array is created, you can define the number of dimensions by using the ndmin argument.In the example below, ndmin=6 creates a 6-dimensional array: the innermost dimension holds the 4 elements, and each of the 5 outer dimensions has a single element that wraps the dimension inside it.
###Code
import numpy as np
arr = np.array([1, 2, 3, 4], ndmin=6)
print(arr)
print('number of dimensions :', arr.ndim)
###Output
[[[[[[1 2 3 4]]]]]]
number of dimensions : 6
###Markdown
Access Array Elements
###Code
import numpy as np
arr = np.array([1, 2, 3, 4])
print(arr[1])
###Output
2
###Markdown
Access 2-D Arrays
###Code
import numpy as np
arr = np.array([[1,2,3,4,5], [6,7,8,9,10]])
print('5th element on 2nd row: ', arr[1, 4])
###Output
5th element on 2nd row: 10
###Markdown
Access 3-D Arrays
###Code
import numpy as np
arr = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])
print(arr[0, 1, 2])
###Output
6
###Markdown
Negative IndexingUse negative indexing to access an array from the end.
###Code
import numpy as np
arr = np.array([[1,2,3,4,5], [6,7,8,9,10]])
print('Last element from 2nd dim: ', arr[1, -1])
###Output
Last element from 2nd dim: 10
###Markdown
Slicing arraysSlicing in Python means taking elements from one given index to another given index. We pass a slice instead of an index like this: [start:end]. We can also define the step, like this: [start:end:step]. If we don't pass start, it's considered 0. If we don't pass end, it's considered the length of the array in that dimension. If we don't pass step, it's considered 1.
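The cells below cover start and end; as a quick extra sketch of the step part (not one of the original examples):

```python
import numpy as np

arr = np.array([1, 2, 3, 4, 5, 6, 7])
# step=2 takes every other element from index 1 (inclusive) up to index 6 (exclusive)
print(arr[1:6:2])   # [2 4 6]
```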
###Code
# Slice elements from index 1 to index 5 from the following array:
import numpy as np
arr = np.array([1, 2, 3, 4, 5, 6, 7])
print(arr[1:5])
# Slice elements from index 4 to the end of the array:
import numpy as np
arr = np.array([1, 2, 3, 4, 5, 6, 7])
print(arr[4:])
# Slice elements from the beginning to index 4 (not included):
import numpy as np
arr = np.array([1, 2, 3, 4, 5, 6, 7])
print(arr[:4])
###Output
[1 2 3 4]
###Markdown
Checking the Data Type of an Array
###Code
import numpy as np
int_arr = np.array([1, 2, 3, 4])
str_arr = np.array(['apple', 'banana', 'cherry'])
print(int_arr.dtype)
print(str_arr.dtype)
###Output
int32
<U6
###Markdown
NumPy Array Copy vs View The Difference Between Copy and ViewThe main difference between a copy and a view of an array is that the copy is a new array, and the view is just a view of the original array. The copy owns the data and any changes made to the copy will not affect original array, and any changes made to the original array will not affect the copy. The view does not own the data and any changes made to the view will affect the original array, and any changes made to the original array will affect the view. Copy
###Code
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
x = arr.copy()
arr[0] = 42
print(arr)
print(x)
###Output
[42 2 3 4 5]
[1 2 3 4 5]
###Markdown
View
###Code
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
x = arr.view()
arr[0] = 42
print(arr)
print(x)
###Output
[42 2 3 4 5]
[42 2 3 4 5]
###Markdown
Check if Array Owns its Data
###Code
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
x = arr.copy()
y = arr.view()
print(x.base)
print(y.base)
###Output
None
[1 2 3 4 5]
###Markdown
Get the Shape of an Array
###Code
# Print the shape of a 2-D array:
import numpy as np
arr = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
print(arr.shape)
import numpy as np
arr = np.array([1, 2, 3, 4], ndmin=5)
print(arr)
print('shape of array :', arr.shape)
###Output
[[[[[1 2 3 4]]]]]
shape of array : (1, 1, 1, 1, 4)
###Markdown
Iterating Arrays
###Code
#Iterate on each scalar element of the 2-D array:
import numpy as np
arr = np.array([[1, 2, 3], [4, 5, 6]])
for x in arr:
for y in x:
print(y,x)
# Iterate on the elements of the following 3-D array:
import numpy as np
arr = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])
for x in arr:
print(x[0][1])
print(x[1][0])
import numpy as np
arr = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])
for x in arr:
for y in x:
for z in y:
print(z,y,x)
###Output
1 [1 2 3] [[1 2 3]
[4 5 6]]
2 [1 2 3] [[1 2 3]
[4 5 6]]
3 [1 2 3] [[1 2 3]
[4 5 6]]
4 [4 5 6] [[1 2 3]
[4 5 6]]
5 [4 5 6] [[1 2 3]
[4 5 6]]
6 [4 5 6] [[1 2 3]
[4 5 6]]
7 [7 8 9] [[ 7 8 9]
[10 11 12]]
8 [7 8 9] [[ 7 8 9]
[10 11 12]]
9 [7 8 9] [[ 7 8 9]
[10 11 12]]
10 [10 11 12] [[ 7 8 9]
[10 11 12]]
11 [10 11 12] [[ 7 8 9]
[10 11 12]]
12 [10 11 12] [[ 7 8 9]
[10 11 12]]
###Markdown
Joining NumPy ArraysWe pass a sequence of arrays that we want to join to the concatenate() function, along with the axis. If axis is not explicitly passed, it is taken as 0.
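The cell below joins 1-D arrays; for 2-D arrays the axis argument controls the direction of the join. A small extra sketch (not one of the original examples):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
# axis=0 (the default) stacks rows; axis=1 joins column-wise
print(np.concatenate((a, b), axis=1))
# [[1 2 5 6]
#  [3 4 7 8]]
```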
###Code
# Join two arrays
import numpy as np
arr1 = np.array([1, 2, 3])
arr2 = np.array([4, 5, 6])
arr = np.concatenate((arr1, arr2))
print(arr)
###Output
[1 2 3 4 5 6]
###Markdown
Splitting NumPy ArraysSplitting is the reverse operation of joining. Joining merges multiple arrays into one, while splitting breaks one array into multiple. We use array_split() for splitting arrays; we pass it the array we want to split and the number of splits.
###Code
import numpy as np
arr = np.array([1, 2, 3, 4, 5, 6])
newarr = np.array_split(arr, 3)
print(newarr)
# Access splitted arrays
import numpy as np
arr = np.array([1, 2, 3, 4, 5, 6])
newarr = np.array_split(arr, 3)
print(newarr[0])
print(newarr[1])
print(newarr[2])
###Output
[1 2]
[3 4]
[5 6]
###Markdown
Splitting 2-D Arrays
###Code
import numpy as np
arr = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12]])
newarr = np.array_split(arr, 3)
print(newarr)
###Output
[array([[1, 2],
[3, 4]]), array([[5, 6],
[7, 8]]), array([[ 9, 10],
[11, 12]])]
|
MaterialCursoPython/Fase 2 - Manejo de datos y optimizacion/Tema 07 - Gestion de errores/Apuntes/Leccion 3 (Apuntes) - Excepciones multiples.ipynb | ###Markdown
Capturing multiple exceptions Saving the exceptionWe can assign an exception to a variable (for example e). That way, with a small trick, we can inspect the type of error that occurred thanks to its identifier:
###Code
try:
n = input("Enter a number: ")
5/n
except Exception as e:
print( type(e).__name__ )
###Output
Enter a number: 10
TypeError
###Markdown
Chaining exceptionsThanks to the error identifiers we can build multiple checks, as long as we leave the default *Exception* clause for last, since it covers any kind of error (if we put it first, the other exception handlers would never run):
###Code
try:
n = float(input("Enter a number: "))
5/n
except TypeError:
print("No se puede dividir el número por una cadena")
except ValueError:
print("Debes introducir una cadena que sea un número")
except ZeroDivisionError:
print("No se puede dividir por cero, prueba otro número")
except Exception as e:
print( type(e).__name__ )
###Output
Enter a number: aaaa
ValueError
|
_notebooks/2020-06-21-02-Basics-of-randomness-and-simulation.ipynb | ###Markdown
Basics of randomness and simulation> This chapter gives you the tools required to run a simulation. We'll start with a review of random variables and probability distributions. We will then learn how to run a simulation by first looking at a simulation workflow and then recreating it in the context of a game of dice. Finally, we will learn how to use simulations for making decisions. This is the Summary of lecture "Statistical Simulation in Python", via datacamp.- toc: true - badges: true- comments: true- author: Chanseok Kang- categories: [Python, Datacamp, Statistics, Modeling]- image:
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['figure.figsize'] = (10, 5)
###Output
_____no_output_____
###Markdown
Introduction to random variables- Continous Random Variables - Infinitely many possible values (e.g., Height, Weights)- Discrete Random Variables - Finite set of possible values (e.g., Outcomes of a six-sided die) Poisson random variableThe `numpy.random` module also has a number of useful probability distributions for both discrete and continuous random variables. In this exercise, you will learn how to draw samples from a probability distribution.In particular, you will draw samples from a very important discrete probability distribution, the Poisson distribution, which is typically used for modeling the average rate at which events occur.Following the exercise, you should be able to apply these steps to any of the probability distributions found in `numpy.random`. In addition, you will also see how the sample mean changes as we draw more samples from a distribution.
###Code
# Initialize seed and parameters
np.random.seed(123)
lam, size_1, size_2 = 5, 3, 100
# Draw samples & calculate absolute difference between lambda and sample mean
samples_1 = np.random.poisson(lam, size_1)
samples_2 = np.random.poisson(lam, size_2)
answer_1 = abs(lam - np.mean(samples_1))
answer_2 = abs(lam - np.mean(samples_2))
print("|Lambda - sample mean| with {} samples is {} and with {} samples is {}. ".format(size_1,
answer_1,
size_2,
answer_2))
###Output
|Lambda - sample mean| with 3 samples is 0.33333333333333304 and with 100 samples is 0.11000000000000032.
###Markdown
Shuffling a deck of cardsOften times we are interested in randomizing the order of a set of items. Consider a game of cards where you first shuffle the deck of cards or a game of scrabble where the letters are first mixed in a bag. As the final exercise of this section, you will learn another useful function - `np.random.shuffle()`. This function allows you to randomly shuffle a sequence in place. At the end of this exercise, you will know how to shuffle a deck of cards or any sequence of items.
###Code
#hide
deck_of_cards = [('Heart', 0),
('Heart', 1),
('Heart', 2),
('Heart', 3),
('Heart', 4),
('Heart', 5),
('Heart', 6),
('Heart', 7),
('Heart', 8),
('Heart', 9),
('Heart', 10),
('Heart', 11),
('Heart', 12),
('Club', 0),
('Club', 1),
('Club', 2),
('Club', 3),
('Club', 4),
('Club', 5),
('Club', 6),
('Club', 7),
('Club', 8),
('Club', 9),
('Club', 10),
('Club', 11),
('Club', 12),
('Spade', 0),
('Spade', 1),
('Spade', 2),
('Spade', 3),
('Spade', 4),
('Spade', 5),
('Spade', 6),
('Spade', 7),
('Spade', 8),
('Spade', 9),
('Spade', 10),
('Spade', 11),
('Spade', 12),
('Diamond', 0),
('Diamond', 1),
('Diamond', 2),
('Diamond', 3),
('Diamond', 4),
('Diamond', 5),
('Diamond', 6),
('Diamond', 7),
('Diamond', 8),
('Diamond', 9),
('Diamond', 10),
('Diamond', 11),
('Diamond', 12)]
# Shuffle the deck
np.random.shuffle(deck_of_cards)
# Print out the top three cards
card_choices_after_shuffle = deck_of_cards[:3]
print(card_choices_after_shuffle)
###Output
[('Spade', 2), ('Heart', 9), ('Diamond', 3)]
###Markdown
Simulation basics- Simulations - Framework for modeling real-world events - Characterized by repeated random sampling - Gives us an approximate solution - Can help solve complex problems- Simulation steps 1. Define possible outcomes for random variables 2. Assign probabilities 3. Define relationships between random variables 4. Get multiple outcomes by repeated random sampling 5. Analyze sample outcomes Throwing a fair dieOnce you grasp the basics of designing a simulation, you can apply it to any system or process. Next, we will learn how each step is implemented using some basic examples.As we have learned, simulation involves repeated random sampling. The first step then is to get one random sample. Once we have that, all we do is repeat the process multiple times. This exercise will focus on understanding how we get one random sample. We will study this in the context of throwing a fair six-sided die.By the end of this exercise, you will be familiar with how to implement the first two steps of running a simulation - defining a random variable and assigning probabilities.
###Code
np.random.seed(123)
die, probabilities, throws = [1, 2, 3, 4, 5, 6], [1/6, 1/6, 1/6, 1/6, 1/6, 1/6], 1
# Use np.random.choice to throw the die once and record the outcome
outcome = np.random.choice(die, size=throws, p=probabilities)
print("Outcome of the throw: {}".format(outcome[0]))
###Output
Outcome of the throw: 5
###Markdown
Throwing two fair diceWe now know how to implement the first two steps of a simulation. Now let's implement the next step - defining the relationship between random variables.Often times, our simulation will involve not just one, but multiple random variables. Consider a game where throw you two dice and win if each die shows the same number. Here we have two random variables - the two dice - and a relationship between each of them - we win if they show the same number, lose if they don't. In reality, the relationship between random variables can be much more complex, especially when simulating things like weather patterns.By the end of this exercise, you will be familiar with how to implement the third step of running a simulation - defining relationships between random variables.
###Code
np.random.seed(223)
# Initialize number of dice, simulate & record outcome
die, probabilities, num_dice = [1,2,3,4,5,6], [1/6, 1/6, 1/6, 1/6, 1/6, 1/6], 2
outcomes = np.random.choice(die, size=num_dice, p=probabilities)
# Win if the two dice show the same number
if outcomes[0] == outcomes[1]:
answer = 'win'
else:
answer = 'lose'
print("The dice show {} and {}. You {}!".format(outcomes[0], outcomes[1], answer))
###Output
The dice show 5 and 5. You win!
###Markdown
Simulating the dice gameWe now know how to implement the first three steps of a simulation. Now let's consider the next step - repeated random sampling.Simulating an outcome once doesn't tell us much about how often we can expect to see that outcome. In the case of the dice game from the previous exercise, it's great that we won once. But suppose we want to see how many times we can expect to win if we played this game multiple times, we need to repeat the random sampling process many times. Repeating the process of random sampling is helpful to understand and visualize inherent uncertainty and deciding next steps.Following this exercise, you will be familiar with implementing the fourth step of running a simulation - sampling repeatedly and generating outcomes.
###Code
np.random.seed(223)
# Initialize model parameters & simulate dice throw
die, probabilities, num_dice = [1,2,3,4,5,6], [1/6, 1/6, 1/6, 1/6, 1/6, 1/6], 2
sims, wins = 100, 0
for i in range(sims):
outcomes = np.random.choice(die, num_dice, p=probabilities)
# Increment `wins` by 1 if the dice show same number
if outcomes[0] == outcomes[1]:
wins = wins + 1
print("In {} games, you win {} times".format(sims, wins))
###Output
In 100 games, you win 25 times
###Markdown
Using simulation for decision-making Simulating one lottery drawingIn the last three exercises of this chapter, we will be bringing together everything you've learned so far. We will run a complete simulation, take a decision based on our observed outcomes, and learn to modify inputs to the simulation model.We will use simulations to figure out whether or not we want to buy a lottery ticket. Suppose you have the opportunity to buy a lottery ticket which gives you a shot at a grand prize of \\$ 1 Million. Since there are 1000 tickets in total, your probability of winning is 1 in 1000. Each ticket costs \\$ 10. Let's use our understanding of basic simulations to first simulate one drawing of the lottery.
###Code
np.random.seed(123)
# Pre-defined constant variables
lottery_ticket_cost, num_tickets, grand_prize = 10, 1000, 1000000
# Probability of winning
chance_of_winning = 1 / num_tickets
# Simulate a single drawing of the lottery
gains = [-lottery_ticket_cost, grand_prize-lottery_ticket_cost]
probability = [1 - chance_of_winning, chance_of_winning]
outcome = np.random.choice(a=gains, size=1, p=probability, replace=True)
print("Outcome of one drawing of the lottery is {}".format(outcome))
###Output
Outcome of one drawing of the lottery is [-10]
###Markdown
Should we buy?In the last exercise, we simulated the random drawing of the lottery ticket once. In this exercise, we complete the simulation process by repeating the process multiple times.Repeating the process gives us multiple outcomes. We can think of this as multiple universes where the same lottery drawing occurred. We can then determine the average winnings across all these universes. If the average winnings are greater than what we pay for the ticket then it makes sense to buy it, otherwise, we might not want to buy the ticket.This is typically how simulations are used for evaluating business investments. After completing this exercise, you will have the basic tools required to use simulations for decision-making.
###Code
np.random.seed(123)
# Initialize size and simulate outcome
lottery_ticket_cost, num_tickets, grand_prize = 10, 1000, 1000000
chance_of_winning = 1/num_tickets
size = 2000
payoffs = [-lottery_ticket_cost, grand_prize - lottery_ticket_cost]
probs = [1 - chance_of_winning, chance_of_winning]
outcomes = np.random.choice(a=payoffs, size=size, p=probs, replace=True)
# Mean of outcomes.
answer = np.mean(outcomes)
print("Average payoff from {} simulations = {}".format(size, answer))
###Output
Average payoff from 2000 simulations = 1990.0
###Markdown
Calculating a break-even lottery priceSimulations allow us to ask more nuanced questions that might not necessarily have an easy analytical solution. Rather than solving a complex mathematical formula, we directly get multiple sample outcomes. We can run experiments by modifying inputs and studying how those changes impact the system. For example, once we have a moderately reasonable model of global weather patterns, we could evaluate the impact of increased greenhouse gas emissions.In the lottery example, we might want to know how expensive the ticket needs to be for it to not make sense to buy it. To understand this, we need to modify the ticket cost to see when the expected payoff is negative.
###Code
np.random.seed(333)
# Initialize simulations and cost of ticket
sims, lottery_ticket_cost = 3000, 0
# Use a while loop to increment `lottery_ticket_cost` till average value of outcomes falls below zero
while 1:
outcomes = np.random.choice([-lottery_ticket_cost, grand_prize-lottery_ticket_cost],
size=sims, p=[1-chance_of_winning, chance_of_winning], replace=True)
if outcomes.mean() < 0:
break
else:
lottery_ticket_cost += 1
answer = lottery_ticket_cost - 1
print("The highest price at which it makes sense to buy the ticket is {}".format(answer))
###Output
The highest price at which it makes sense to buy the ticket is 9
|
ATMS-597-SP-2020-Project-2/ATMS_597_Project_2_Rylan.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import requests
def make_request(endpoint, payload=None):
"""
Make a request to a specific endpoint on the weather API
passing headers and optional payload.
Parameters:
- endpoint: The endpoint of the API you want to
make a GET request to.
- payload: A dictionary of data to pass along
with the request.
Returns:
Response object.
"""
return requests.get(
f'https://www.ncdc.noaa.gov/cdo-web/api/v2/'+endpoint,
headers={
'token': 'yicVcIaiwUAgtBveaBtWSaioiQvqRJRh'
},
params=payload
)
# This cell will request locations. We used this to find the locationid for Champaign, IL area.
# !!!No need to run this cell again unless we want to look up a new locationid!!!
response = make_request(
'locations',
{
'datasetid' : 'GHCND',
'locationcategoryid' : 'CITY',
'datacategoryid' : 'TEMP',
'sortorder' : 'desc',
'limit' : 1000 # max allowed
}
)
response.json()
# This cell will request stations. We used this to find the stationid for Rantoul, IL station.
# !!!No need to run this cell again unless we want to look up a new stationid!!!
response = make_request(
'stations',
{
'datasetid' : 'GHCND',
'locationid' : 'CITY:US170004',
'datacategoryid' : 'TEMP',
'limit' : 1000 # max allowed
}
)
response.json()
# Create lists containing the beginning and end of years we want to loop over.
# Clunky for now, can probably make this smoother using some kind of loop to add one to the year each time
currentlist = [datetime.date(2015, 1, 1), datetime.date(2016, 1, 1), datetime.date(2017, 1, 1), datetime.date(2018, 1, 1), datetime.date(2019, 1, 1)]
endlist = [datetime.date(2015, 12, 31), datetime.date(2016, 12, 31), datetime.date(2017, 12, 31), datetime.date(2018, 12, 31),datetime.date(2019, 12, 31)]
# This cell will request the data
results = [] # get an empty list to fill with data
numloops = np.arange(len(currentlist)) # fill a numper array with the length of the list of years we want
#Start the loop over the years we want
for i in numloops:
current = currentlist[i] # set current to the beginning of the year in our loop
end = endlist[i] # set end to the end of the year in our loop
# update the cell with status information
display.clear_output(wait=True)
display.display(f'Gathering data for {str(current)}')
response = make_request(
'data',
{
'datasetid' : 'GHCND', # Global Historical Climatology Network - Daily (GHCND) dataset
'datatypeid' : 'TMAX',
'stationid' : 'GHCND:USW00014806',
'startdate' : current,
'enddate' : end,
'units' : 'metric',
'limit' : 1000 # max allowed
}
)
response.json()
results.extend(response.json()['results']) # put the data in the results list
len(results) # check the length of the results list to make sure we have the correct number of days
# Put the results in a pandas dataframe
df = pd.DataFrame(results)
df.head()
###Rylan's code for getting Yearly Average Temperature goes here. ###
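# A possible sketch for that step (an assumption, not the author's solution): GHCND records
# come back with 'date' and 'value' fields, so parse the dates and average TMAX by year.
df['date'] = pd.to_datetime(df['date'])
yearly_avg_tmax = df.groupby(df['date'].dt.year)['value'].mean()
print(yearly_avg_tmax)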
###Output
_____no_output_____ |
PlantVillage/PlantVillageDataset.ipynb | ###Markdown
Installing Hub
###Code
!pip3 install hub --quiet
# Run below cells and restart the runtime
# if you are running it in colab
# import os
# os.kill(os.getpid(), 9)
###Output
_____no_output_____
###Markdown
Download raw dataset
###Code
from IPython.display import clear_output
# Download dataset here
!wget https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/tywbtsjrjv-1.zip
!unzip tywbtsjrjv-1.zip
!unzip Plant_leaf_diseases_dataset_with_augmentation.zip
!unzip Plant_leaf_diseases_dataset_without_augmentation.zip
!rm -rf *.zip
clear_output()
import os
from glob import glob
###Output
_____no_output_____
###Markdown
Creating dataset on hub **Activeloop API** : https://docs.activeloop.ai/api-basics
###Code
import hub
# Login to ActiveLoop
%env BUGGER_OFF=True
!activeloop login -u username -p password
!activeloop reporting --off
change_classes = {
'Peach___healthy' : 'Peach_healthy',
'Strawberry___Leaf_scorch' : 'Strawberry_leaf_scorch',
'Grape___Esca_(Black_Measles)' : 'Grape_black_measles',
'Tomato___Septoria_leaf_spot' : 'Tomato_septoria_leaf_spot',
'Grape___healthy' : 'Grape_healthy',
'Tomato___healthy' : 'Tomato_healthy',
'Peach___Bacterial_spot' : 'Peach_bacterial_spot',
'Corn___Cercospora_leaf_spot Gray_leaf_spot' : 'Corn_gray_leaf_spot',
'Soybean___healthy' : 'Soybean_healthy',
'Corn___Common_rust' : 'Corn_common_rust',
'Blueberry___healthy' : 'Blueberry_healthy',
'Corn___healthy' : 'Corn_healthy',
'Apple___healthy' : 'Apple_healthy',
'Apple___Cedar_apple_rust' : 'Apple_cedar_apple_rust',
'Background_without_leaves' : 'Background_without_leaves',
'Tomato___Target_Spot' : 'Tomato_target_spot',
'Pepper,_bell___healthy' : 'Pepper_healthy',
'Grape___Black_rot' : 'Grape_black_rot',
'Apple___Apple_scab' : 'Apple_scab',
'Raspberry___healthy' : 'Raspberry_healthy',
'Tomato___Early_blight' : 'Tomato_early_blight',
'Tomato___Tomato_Yellow_Leaf_Curl_Virus' : 'Tomato_yellow_leaf_curl_virus',
'Corn___Northern_Leaf_Blight' : 'Corn_northern_leaf_blight',
'Potato___healthy' : 'Potato_healthy',
'Tomato___Late_blight' : 'Tomato_late_blight',
'Cherry___Powdery_mildew' : 'Cherry_powdery_mildew',
'Grape___Leaf_blight_(Isariopsis_Leaf_Spot)' : 'Grape_leaf_blight',
'Tomato___Leaf_Mold' : 'Tomato_leaf_mold',
'Pepper,_bell___Bacterial_spot' : 'Pepper_bacterial_spot',
'Potato___Late_blight' : 'Potato_late_blight',
'Tomato___Tomato_mosaic_virus' : 'Tomato_mosaic_virus',
'Potato___Early_blight' : 'Potato_early_blight',
'Tomato___Bacterial_spot' : 'Tomato_bacterial_spot',
'Strawberry___healthy' : 'Strawberry_healthy',
'Cherry___healthy' : 'Cherry_healthy',
'Squash___Powdery_mildew' : 'Squash_powdery_mildew',
'Tomato___Spider_mites Two-spotted_spider_mite' : 'Tomato_spider_mites_two-spotted_spider_mite',
'Orange___Haunglongbing_(Citrus_greening)' : 'Orange_haunglongbing',
'Apple___Black_rot' : 'Apple_black_rot'
}
class_names = list(change_classes.values())
folders = list(change_classes.keys())
print(f'folders -> {folders}')
print(f'classes -> {class_names}')
without_augmentation = '/content/Plant_leave_diseases_dataset_without_augmentation'
with_augmentation = '/content/Plant_leave_diseases_dataset_with_augmentation'
class_names.index('Tomato_healthy')
filename_path = 'hub://<username>/plantvillage-without-augmentation'
ds = hub.dataset(filename_path)
with ds:
ds.create_tensor('images', htype='image', sample_compression='jpg')
ds.create_tensor('labels', htype='class_label', class_names = class_names)
for folder in folders:
path = os.path.join(without_augmentation, folder)
label = change_classes[folder]
label_index = class_names.index(label)
images = glob(os.path.join(path, '*.JPG'))
print(f'{folder} -> {label} -> {label_index}')
for image in images:
ds.images.append(hub.read(image))
ds.labels.append(label_index)
filename_path = 'hub://<username>/plantvillage-with-augmentation'
ds = hub.dataset(filename_path)
with ds:
ds.create_tensor('images', htype='image', sample_compression='jpg')
ds.create_tensor('labels', htype='class_label', class_names = class_names)
for folder in folders:
path = os.path.join(with_augmentation, folder)
label = change_classes[folder]
label_index = class_names.index(label)
images = glob(os.path.join(path, '*.JPG'))
print(f'{folder} -> {label} -> {label_index}')
for image in images:
ds.images.append(hub.read(image))
ds.labels.append(label_index)
###Output
Your Hub dataset has been successfully created!
The dataset is private so make sure you are logged in!
This dataset can be visualized at https://app.activeloop.ai/activeloop/plantvillage-with-augmentation.
Peach___healthy -> Peach_healthy -> 0
Strawberry___Leaf_scorch -> Strawberry_leaf_scorch -> 1
Grape___Esca_(Black_Measles) -> Grape_black_measles -> 2
Tomato___Septoria_leaf_spot -> Tomato_septoria_leaf_spot -> 3
Grape___healthy -> Grape_healthy -> 4
Tomato___healthy -> Tomato_healthy -> 5
Peach___Bacterial_spot -> Peach_bacterial_spot -> 6
Corn___Cercospora_leaf_spot Gray_leaf_spot -> Corn_gray_leaf_spot -> 7
Soybean___healthy -> Soybean_healthy -> 8
Corn___Common_rust -> Corn_common_rust -> 9
Blueberry___healthy -> Blueberry_healthy -> 10
Corn___healthy -> Corn_healthy -> 11
Apple___healthy -> Apple_healthy -> 12
Apple___Cedar_apple_rust -> Apple_cedar_apple_rust -> 13
Background_without_leaves -> Background_without_leaves -> 14
Tomato___Target_Spot -> Tomato_target_spot -> 15
Pepper,_bell___healthy -> Pepper_healthy -> 16
Grape___Black_rot -> Grape_black_rot -> 17
Apple___Apple_scab -> Apple_scab -> 18
Raspberry___healthy -> Raspberry_healthy -> 19
Tomato___Early_blight -> Tomato_early_blight -> 20
Tomato___Tomato_Yellow_Leaf_Curl_Virus -> Tomato_yellow_leaf_curl_virus -> 21
Corn___Northern_Leaf_Blight -> Corn_northern_leaf_blight -> 22
Potato___healthy -> Potato_healthy -> 23
Tomato___Late_blight -> Tomato_late_blight -> 24
Cherry___Powdery_mildew -> Cherry_powdery_mildew -> 25
Grape___Leaf_blight_(Isariopsis_Leaf_Spot) -> Grape_leaf_blight -> 26
Tomato___Leaf_Mold -> Tomato_leaf_mold -> 27
Pepper,_bell___Bacterial_spot -> Pepper_bacterial_spot -> 28
Potato___Late_blight -> Potato_late_blight -> 29
Tomato___Tomato_mosaic_virus -> Tomato_mosaic_virus -> 30
Potato___Early_blight -> Potato_early_blight -> 31
Tomato___Bacterial_spot -> Tomato_bacterial_spot -> 32
Strawberry___healthy -> Strawberry_healthy -> 33
Cherry___healthy -> Cherry_healthy -> 34
Squash___Powdery_mildew -> Squash_powdery_mildew -> 35
Tomato___Spider_mites Two-spotted_spider_mite -> Tomato_spider_mites_two-spotted_spider_mite -> 36
Orange___Haunglongbing_(Citrus_greening) -> Orange_haunglongbing -> 37
Apple___Black_rot -> Apple_black_rot -> 38
###Markdown
Testing dataset from Hub
###Code
filename_path = 'hub://<username>/plantvillage-with-augmentation'
ds = hub.dataset(filename_path)
image = ds.images[0].numpy()
label = ds.labels[0].data()
###Output
_____no_output_____ |
nbfiles/21_invader.ipynb | ###Markdown
Continuing the challenge--- Level 21 works on the archive obtained in level 20 by requesting [unreal.jpg](http://www.pythonchallenge.com/pc/hex/unreal.jpg) with a specific `Range` header* The archive contains a `package.pack` file, and the puzzle's `readme.txt` says:> * We used to play this game when we were kids> * When I had no idea what to do, I looked backwards. First repeat the steps from the previous level to extract `package.pack` and take a look:
###Code
from io import BytesIO
from zipfile import ZipFile
import requests
with requests.Session() as sess:
sess.auth = ('butter', 'fly')
header = {'Range': 'bytes=1152983631-'}
response = sess.get('http://www.pythonchallenge.com/pc/hex/unreal.jpg', headers=header)
with ZipFile(BytesIO(response.content), 'r') as f:
with f.open('package.pack', 'r', pwd=b'invader'[::-1]) as f_pack:
package = f_pack.read()
print(package[:20])
###Output
b'x\x9c\x00\n@\xf5\xbfx\x9c\x00\x07@\xf8\xbfx\x9c\x00\x06@\xf9'
###Markdown
A quick lookup shows that content starting with `b'x\x9c\x00` is the `zlib` compression format, so let's try decompressing it:
###Code
import zlib
temp = zlib.decompress(package)
print(temp[:20])
###Output
b'x\x9c\x00\x07@\xf8\xbfx\x9c\x00\x06@\xf9\xbfx\x9c\x00\xff?\x00'
###Markdown
It's that unwrap-in-a-loop kind of game again! Let's keep going:
###Code
import zlib
data = package
while True:
try:
data = zlib.decompress(data)
except Exception as e:
print(data[:20])
print(f'{e!r}')
break
###Output
b'BZh91AY&SY\x91\xe8/+\x00v\xa9\x7f\xff\xff'
error('Error -3 while decompressing data: incorrect header check')
###Markdown
Huh? It has switched to the `bzip2` compression format (the part starting with `BZh`), so let's tweak the code and continue:
###Code
import bz2
import zlib
data = package
while True:
try:
data = zlib.decompress(data)
except:
try:
data = bz2.decompress(data)
except Exception as e:
print(data[:20])
print(f'{e!r}')
break
###Output
b'\x80\x8d\x96\xcb\xb5r\xa7\x00\x06Xz\xdafO\x19\xee\x84k\xa4d'
OSError('Invalid data stream')
###Markdown
This time we have no idea what it is... OK, let's go back and read the clues. It says this is a game we used to play as kids, and we have just been repeatedly decompressing the same thing, so the game is probably like a **thing** being passed around among friends, where each person **wraps** it in one more layer in some way (*compression*) before passing it on. What we are doing is unwrapping it to get back the original content. Fine so far, but now we are stuck. Look at the second sentence again: when we get stuck, we should try **looking at it backwards**:
###Code
print(data[::-1][:20])
###Output
b'x\x9c\x00\x0c@\xf3\xbfx\x9c\x00\x05@\xfa\xbfx\x9c\x00\x05@\xfa'
###Markdown
That worked!! Let's adjust the code and continue:
###Code
import bz2
import zlib
data = package
try_count = 0
while True:
try:
data = zlib.decompress(data)
except:
try:
data = bz2.decompress(data)
except:
data = data[::-1]
try_count += 1
if try_count == 3:
print(data[:20])
break
continue
try_count = 0
print(data.decode())
###Output
b'look at your logs'
look at your logs
###Markdown
We finally unpacked the original content! But it tells us to look at our logs, so it seems we need to add some printing to record our decompression operations.---Before we continue: first, I found a library called `python-magic` that can identify the exact format of a blob of content, so we don't have to keep looking it up ourselves. At the very least it lets us clean up that ugly code above.
###Code
from magic import Magic
magic_t = Magic(mime=True)
print(magic_t.from_buffer(package))
###Output
application/zlib
###Markdown
Second, we have three different operations and need to define the character each one prints:| Operation | Character || :---: | :---: || zlib | '.' || bz2 | '0' || reverse | '\n' |
###Code
import bz2
import zlib
from magic import Magic
magic_t = Magic(mime=True)
data = package
while True:
mime = magic_t.from_buffer(data)
if mime in ('application/zlib', 'application/x-tex-tfm'):
data = zlib.decompress(data)
print('.', end='')
elif mime == 'application/x-bzip2':
data = bz2.decompress(data)
print('0', end='')
else:
data = data[::-1]
print()
if mime == 'text/plain':
break
print(data.decode())
###Output
......000..........000......00000000....00000000....0000000000..00000000
....0000000......0000000....000000000...000000000...000000000...000000000
...00.....00....00.....00...00......00..00......00..00..........00......00
..00...........00.......00..00......00..00......00..00..........00......00
..00...........00.......00..000000000...000000000...00000000....000000000
..00...........00.......00..00000000....00000000....00000000....00000000.
..00...........00.......00..00..........00..........00..........00...00.
...00.....00....00.....00...00..........00..........00..........00....00.
....0000000......0000000....00..........00..........000000000...00.....00.
......000..........000......00..........00..........0000000000..00......00
look at your logs
|
unit-1-build/notebooks/clean_dataset_calz.ipynb | ###Markdown
**EDA**
###Code
# inspect head @TODO DROP NAME COL
print(df.shape)
df.head()
print(df_labels.shape)
df_labels.head(20)
# define a function that will take the cell contents of num_to_rate and remove the
# parentheses
def strip_n2r(x):
return (-1)*x
test=df_labels['num_to_rate'].apply(strip_n2r)
test.head()
# make changes to df
df_labels['num_to_rate']=df_labels['num_to_rate'].apply(strip_n2r)
###Output
_____no_output_____
###Markdown
Data Cleaning/feature engineering **geo-codes engineering**
###Code
# im going to have to extract the geo code from the following links and compare them to my geo codes from
# my labels df to map the reviews to a particular store
print("labels df:")
print(df_labels['web-scraper-start-url'].iloc[0])
print("\nreviews df:")
df['web-scraper-start-url'].iloc[0]
# define a function to strip the url strings to reveal geo tags
def parse_geo(x):
return x.split('@')[1].split(',')[:-1]
# test it out before i make any changes
# note i probably need to only test on one of the df's because the
# start url is in the same format in both df's
test=df['web-scraper-start-url'].apply(parse_geo)
test.head()
# make changes
df['geo']=df['web-scraper-start-url'].apply(parse_geo)
df_labels['geo']=df_labels['web-scraper-start-url'].apply(parse_geo)
###Output
_____no_output_____
###Markdown
**rating valuation engineering**
###Code
# on the reviews it had images of lit stars triggered by js
# to work around this i took the html source because i knew it would
# show which stars are supposed to be triggered, but as a drawback
# now i have to clean HTML instead of pretty output
df['review'].head()
# make a function to apply to col
def clean_rating(x):
# make a list of strings to work with
out=x.split()
# sort list so that duplicates group together
out.sort()
# the first element is just '<span>' and the following 20 elements are redundant,
# the last 5 elements are placement flags
# remove all of them and leave me with just the number of active stars
# then get a count so i can have a pretty int
return len(out[21:len(out)-5])
# test to make sure that it works
test=df['review'].apply(clean_rating)
test.head()
# make changes to df
df['review']=df['review'].apply(clean_rating)
###Output
_____no_output_____
###Markdown
**datetime engineering for graphing**
###Code
# human-readable dates are great to read, not so great to graph, I need to make these
# back into dt format (why I have a glob at the top with the date of scrape)
df['time_published'].value_counts()
# create a map for all units to hours
time_map={'second':0.000278,'minute':0.0167,'hour':1,'day':24,'week':168,'month':730.5,'year':8766} # a month is roughly 8766/12 hours
# make a function that takes a relative time like 'a day ago' and converts it to a datetime
def human_to_dt(x):
# the number that i will end up subtracting from scrape date
diff=0
# make a datetime object to hold the date that the dataset was scraped
dt=datetime.datetime.strptime(DATE_SCRAPED, '%m-%d-%Y')
# strip the string down to two values a quantifier and a unit
q,u=x[:-3].strip().split(' ')
# remove trailing s on unit it is not needed
if u[len(u)-1]=="s":
u=u[:len(u)-1]
# check if there is just one unit if so then set diff=1
if q == "a":
diff=time_map[u]
# if the number is not one then multiply the q by the map_key entry for u
else:
#safe cast
try:
diff=time_map[u]*int(q)
except:
print("ERROR")
return 0
# convert dt to utc timestamp
dt=dt-datetime.timedelta(hours=diff)
timestamp = dt.replace(tzinfo=datetime.timezone.utc).timestamp()
return timestamp
# making sure that it works how i want it to
testvalue=df['time_published'].iloc[0]
print(human_to_dt(testvalue))
del testvalue
# apply changes to df
df['time_published']=df['time_published'].apply(human_to_dt)
###Output
_____no_output_____
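###Markdown
As a quick sanity check (this cell is an added illustration, not part of the original notebook), one of the generated UTC timestamps can be converted back into a readable datetime to confirm the round trip behaves as expected.
###Code
# Added illustration: convert the first generated timestamp back to a UTC datetime.
# Assumes the cells above have run, so df['time_published'] already holds
# the UTC timestamps produced by human_to_dt.
check_ts = df['time_published'].iloc[0]
print(datetime.datetime.fromtimestamp(check_ts, tz=datetime.timezone.utc))
###Output
_____no_output_____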
###Markdown
more data cleaning to drop columns that I don't need anymore
###Code
df.head()
df_labels.head()
# going to drop internal scraper columns that don't add context for the information
# that is provided by them
df=df.drop(['web-scraper-order','web-scraper-start-url'],axis=1)
df_labels=df_labels.drop(['web-scraper-order','web-scraper-start-url'],axis=1)
###Output
_____no_output_____
###Markdown
merging df's based on geo codes
###Code
# but first i have to cast and round existing geo codes in both df's because
# google does such a great job at consistency
# define a function that returns a list of rounded floats from the geo tags
# going with 3 decimal places since that equates to roughly 110m and none of the
# stores are that close
def clean_geos(x):
out=[round(float(x[0]),3),round(float(x[1]),3)]
return out
# self-explanatory
def sep_lat(x):
return x[0]
def sep_long(x):
return x[1]
test=df['geo'].iloc[0]
test_labels=df_labels['geo'].iloc[0]
print(clean_geos(test))
print(clean_geos(test_labels))
# make changes to df's
df['geo']=df['geo'].apply(clean_geos)
df_labels['geo']=df_labels['geo'].apply(clean_geos)
# add latitude and longitude columns
df['latitude']=df['geo'].apply(sep_lat)
df_labels['latitude']=df_labels['geo'].apply(sep_lat)
df['longitude']=df['geo'].apply(sep_long)
df_labels['longitude']=df_labels['geo'].apply(sep_long)
df_labels.head()
df.head()
# drop some more columns that are a pain in my ass
df=df.drop('geo',axis=1)
df_labels=df_labels.drop('geo',axis=1)
# merge data frames, which will map the store to the customer
test=df.merge(df_labels,on=['latitude','longitude'])
# grab a random sample to see what the dataframe looks like
test.sample(40)
###Output
_____no_output_____
###Markdown
**I think that's gonna be good for this dataset, let's do some housekeeping and export it to a file to use**
###Code
print(test.columns.to_list())
# rename my columns to something easier than x_name, y_name, etc.
df=test
perfered_names=['customer_name', 'review_rating', 'review_content','time', 'owner_response','owner_response_time','latitude',
'longitude', 'store_name', 'store_address','store_rating','num_to_rate']
df.columns=perfered_names
df.columns.to_list()
perf_order=['time',
'customer_name',
'review_content',
'review_rating',
'store_name',
'store_rating',
'store_address',
'num_to_rate',
'owner_response',
'owner_response_time',
'latitude',
'longitude']
df=df[perf_order]
df.head()
df.to_csv('calz_processed.csv')
###Output
_____no_output_____ |
courses/12. List comprehension in Python.ipynb | ###Markdown
Python: List comprehension Goals: * Interesting new functions: enumerate() and items()* Discovering list comprehension and its advantages* Real case: dataset on the historical members of the American Congress* Count and determine the most frequent first names The enumerate function
###Code
# Motivation
students = ["Daouda", "Moha", "Seyni", "Khadir", "Mamadou"]
ages = [16, 12, 17, 10, 15]
###Output
_____no_output_____
###Markdown
Let's display each student's age.
###Code
for student in students:
print(student)
for age in ages:
print(age)
###Output
Daouda
Moha
Seyni
Khadir
Mamadou
16
12
17
10
15
###Markdown
Notice that when we iterate over the first list, we have no direct way to reach the matching elements of the second list. Python's **enumerate()** function makes this task much easier.
###Code
# Overview
for index, student in enumerate(students):
print("Index:", index)
print("Student:", student)
###Output
Index: 0
Student: Daouda
Index: 1
Student: Moha
Index: 2
Student: Seyni
Index: 3
Student: Khadir
Index: 4
Student: Mamadou
###Markdown
Thus with the index, it is possible to retrieve the age of each student.
###Code
# Example 1
for index, student in enumerate(students):
print("Student:", student)
print("Age:", ages[index])
# Example 2
cars = [["Black", "Tesla", "Model X"], ["Grey", "Tesla", "Model S Plaid"]]
prices = [114990, 129990]
###Output
_____no_output_____
###Markdown
Let's use the **enumerate()** function to add the price to each car.
###Code
for i, car in enumerate(cars):
car.append(prices[i])
print(cars)
###Output
[['Black', 'Tesla', 'Model X', 114990], ['Grey', 'Tesla', 'Model S Plaid', 129990]]
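###Markdown
One extra detail worth knowing (an added illustration, not from the original course): **enumerate()** accepts an optional start argument, which is handy when you want the numbering to begin at 1 instead of 0.
###Code
# Extra example: start counting from 1 instead of 0
for position, student in enumerate(students, start=1):
    print(position, student)
###Output
_____no_output_____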
###Markdown
List comprehension
###Code
# Motivation
animals = ["Dog", "Tiger", "Lion", "Cow", "Snake"]
animals_lenght = []
for animal in animals:
animals_lenght.append(len(animal))
print(animals_lenght)
# Use of list comprehension
animals_lenght = [len(animal) for animal in animals]
animals_lenght
# Example
prices = [10, 150, 200, 350]
prices_doubled = [price * 2 for price in prices]
prices_doubled
###Output
_____no_output_____
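###Markdown
As an extra illustration (not part of the original course material), a list comprehension can also include a condition, which filters elements while transforming them.
###Code
# Extra example: keep only the prices above 100, doubled
expensive_doubled = [price * 2 for price in prices if price > 100]
expensive_doubled
###Output
_____no_output_____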
###Markdown
Counting female names Training
###Code
import csv
f = open("legislators.csv")
legislators = list(csv.reader(f))
for row in legislators:
birthday = row[2]
birth_year = birthday.split('-')[0]
try:
birth_year = int(birth_year)
except Exception:
birth_year = 0
row.append(birth_year)
legislators[0][7] = "birth_year"
name_counts = {}
for row in legislators:
gender = row[3]
year = row[7]
if gender == 'F' and year > 1950:
name = row[1]
if name in name_counts:
name_counts[name] += 1
else:
name_counts[name] = 1
print(name_counts)
###Output
{'Enid': 1, 'Lynn': 1, 'Karen': 1, 'Denise': 1, 'Katherine': 1, 'Melissa': 2, 'Blanche': 1, 'Cynthia': 1, 'Shelley': 2, 'Nancy': 1, 'Deborah': 2, 'Heather': 1, 'Kathleen': 2, 'Mary': 2, 'Stephanie': 1, 'Betsy': 1, 'Hilda': 1, 'Ellen': 1, 'Gabrielle': 1, 'Sandy': 1, 'Ann Marie': 1, 'Nan': 1, 'Laura': 1, 'Jean': 1, 'Betty': 1}
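###Markdown
As an aside (an alternative approach, not used in this course), the standard library's collections.Counter performs the same kind of counting with much less code.
###Code
# Alternative counting approach using the standard library
from collections import Counter
female_names = [row[1] for row in legislators if row[3] == 'F' and row[7] > 1950]
print(Counter(female_names))
###Output
_____no_output_____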
###Markdown
The None object
###Code
# Motivation 1
values = [2, 12, 60]
max_value = 0
for value in values:
if value > max_value:
max_value = value
print(max_value)
# Motivation 2
values = [-2, -12, -60]
max_value = 0
for value in values:
if value > max_value:
max_value = value
print(max_value)
# With None
values = [-2, -12, -60]
max_value = None
for value in values:
if max_value is None or value > max_value:
max_value = value
print(max_value)
###Output
-2
###Markdown
Training
###Code
values = [None, 1, 45, None, 75]
check_bool = [x is not None and x > 30 for x in values]
check_bool
###Output
_____no_output_____
###Markdown
Application: most frequent female names Training
###Code
max_value = None
for key in name_counts:
value = name_counts[key]
if max_value is None or value > max_value:
max_value = value
print(name_counts)
print(max_value)
###Output
2
###Markdown
The items method
###Code
# Example
fruits = {
"apple" : 12,
"banana" : 5,
"orange" : 20
}
for fruit, number in fruits.items():
print(fruit, ":", number)
###Output
apple : 12
banana : 5
orange : 20
###Markdown
Find frequent first names Training 1
###Code
top_female_names = [k for k, v in name_counts.items() if v == 2]
top_female_names
###Output
_____no_output_____
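###Markdown
A small refinement (an added suggestion, not from the original course): instead of hard-coding the count of 2, we can reuse the max_value computed earlier, so the comprehension keeps working if the data changes.
###Code
# Reuse the maximum count found earlier instead of the literal 2
top_female_names = [k for k, v in name_counts.items() if v == max_value]
top_female_names
###Output
_____no_output_____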
###Markdown
Training 2
###Code
top_male_names = []
male_name_counts = {}
for row in legislators:
if row[3] == "M" and row[7] > 1940:
name = row[1]
if name in male_name_counts:
male_name_counts[name] += 1
else:
male_name_counts[name] = 1
top_male_count = None
for name, count in male_name_counts.items():
if top_male_count is None or count > top_male_count:
top_male_count = count
for name, count in male_name_counts.items():
if count == top_male_count:
top_male_names.append(name)
print(top_male_names)
###Output
['John']
###Markdown
Challenge Dataset
###Code
import csv
f = open("nfl_suspensions_data.csv")
nfl_suspensions = list(csv.reader(f))
nfl_suspensions = nfl_suspensions[1:]
print(nfl_suspensions[:5])
years = {}
for suspension in nfl_suspensions:
row_year = suspension[5]
if row_year in years:
years[row_year] += 1
else:
years[row_year] = 1
print(years)
###Output
{'2014': 29, '1946': 1, '1947': 1, '2010': 21, '2008': 10, '2007': 17, '1983': 1, '2009': 10, '2005': 8, '2000': 1, '2012': 45, '2001': 3, '2006': 11, '1989': 17, ' ': 1, '1963': 1, '2013': 40, '1990': 3, '2011': 13, '2004': 6, '2002': 7, '2003': 9, '1997': 3, '1999': 5, '1993': 1, '1995': 1, '1998': 2, '1994': 1, '1986': 1}
###Markdown
Unique values
###Code
teams = [row[1] for row in nfl_suspensions]
unique_teams = set(teams)
print(unique_teams)
games = [row[2] for row in nfl_suspensions]
unique_games = set(games)
print(unique_games)
###Output
{'2', '36', 'Indef.', '10', '4', '14', '3', '16', '1', '20', '6', '8', '32', '5'}
###Markdown
Suspension class
###Code
class Suspension():
def __init__(self, row):
self.name = row[0]
self.team = row[1]
self.games = row[2]
self.year = row[5]
third_suspension = Suspension(nfl_suspensions[2])
print(third_suspension.name, "|", third_suspension.team, "|", third_suspension.games, "|", third_suspension.year)
###Output
L. Brazill | IND | Indef. | 2014
###Markdown
Improved suspension class
###Code
class Suspension():
def __init__(self, row):
self.name = row[0]
self.team = row[1]
self.games = row[2]
try:
self.year = int(row[5])
except Exception:
self.year = 0
def get_year(self):
return self.year
missing_year = Suspension(nfl_suspensions[22])
get_missing_year = missing_year.get_year()
print(get_missing_year)
###Output
0
|
component-clustering/duplicate_component_exploration.ipynb | ###Markdown
Examining Volunteer internal consistency
###Code
%load_ext autoreload
%autoreload 2
import json
import os
import re
import numpy as np
import pandas as pd
import lib.galaxy_utilities as gu
from functools import partial
from gzbuilderspirals.oo import Arm
import matplotlib.pyplot as plt
dr8ids, ss_ids, validation_ids = np.load('lib/duplicate_galaxies.npy').T
print('Defining helper functions')
def get_annotations(sid):
return gu.classifications.query(
'subject_ids == {}'.format(sid)
)['annotations'].apply(json.loads)
def n_drawn_comps(a, task=0):
try:
return len(a[task]['value'][0]['value'])
except IndexError:
return np.nan
def get_details(ann0, ann1, task=0):
n_drawn0 = ann0.apply(partial(n_drawn_comps, task=task))
n_drawn1 = ann1.apply(partial(n_drawn_comps, task=task))
return sum(((s.mean(), s.std()) for s in (n_drawn0, n_drawn1)), ())
def get_disk_details(ann0, ann1):
return get_details(ann0, ann1, task=0)
def get_bulge_details(ann0, ann1):
return get_details(ann0, ann1, task=1)
def get_bar_details(ann0, ann1):
return get_details(ann0, ann1, task=2)
def get_spiral_arm_details(ann0, ann1):
return get_details(ann0, ann1, task=3)
print('Constructing classification details Data Frame')
out = []
columns = [
'{}-{}-{}'.format(s, k, v)
for k in ('disk', 'bulge', 'bar', 'spiral_arms')
for s in ('original', 'validation')
for v in ('mean', 'std')
]
for i in range(len(dr8ids)):
id_details = {
'original_id': ss_ids[i],
'validation_id': validation_ids[i],
'dr8id': dr8ids[i],
}
details = np.array([
get_details(
get_annotations(ss_ids[i]),
get_annotations(validation_ids[i]),
task=j
)
for j in range(4)
])
freq_details = {k: v for k, v in zip(columns, details.reshape(-1))}
out.append({**id_details, **freq_details})
df = pd.DataFrame(out)
###Output
Constructing classification details Data Frame
###Markdown
How much did our volunteers agree with one another? These plots show the fraction of volunteers drawing a component for galaxies in our original and validation subsets. The spiral arm plot shows the mean number of spiral arms for each galaxy.
###Code
fig, (ax_disk, ax_bulge, ax_bar, ax_spiral) = plt.subplots(ncols=4, figsize=(19, 5))
ax_disk.plot(df['original-disk-mean'], df['validation-disk-mean'], '.', c='C0')
ax_bulge.plot(df['original-bulge-mean'], df['validation-bulge-mean'], '.', c='C1')
ax_bar.plot(df['original-bar-mean'], df['validation-bar-mean'], '.', c='C2')
ax_spiral.plot(df['original-spiral_arms-mean'], df['validation-spiral_arms-mean'], '.', c='C3')
ax_disk.set_title('Fraction of classifications with Disk')
ax_bulge.set_title('Fraction of classifications with Bulge')
ax_bar.set_title('Fraction of classifications with Bar')
for ax in (ax_disk, ax_bulge, ax_bar):
ax.set_xlabel('Original set')
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax_disk.set_ylabel('Validation set')
ax_spiral.set_title('Mean number of spiral arms drawn')
ax_spiral.set_xlabel('Original set')
plt.savefig('duplicates_plots/component_frequency.pdf', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
The aggregate modelHow consistent is our aggregated model? We explore the consistency with which a component appears in our aggregated model, and how frequently we obtain a consistent number of spiral arms.
###Code
gzb_models = pd.read_pickle('galaxy-builder-aggregate-models.pickle')
original_models = gzb_models.loc[ss_ids]
validation_models = gzb_models.loc[validation_ids]
disk_agree = ~np.logical_xor(
original_models['disk-axRatio'].notna(),
validation_models['disk-axRatio'].notna()
)
bulge_agree = ~np.logical_xor(
original_models['bulge-axRatio'].notna(),
validation_models['bulge-axRatio'].notna()
)
bar_agree = ~np.logical_xor(
original_models['bar-axRatio'].notna(),
validation_models['bar-axRatio'].notna()
)
print('Disk agrees {:.3%} of the time'.format(disk_agree.sum() / len(disk_agree)))
print('Bulge agrees {:.3%} of the time'.format(bulge_agree.sum() / len(disk_agree)))
print('Bar agrees {:.3%} of the time'.format(bar_agree.sum() / len(disk_agree)))
print('Total model agrees {:.3%} of the time'.format(
(disk_agree & bulge_agree & bar_agree).sum() / len(disk_agree)
))
def get_n_spirals_in_model(sid):
return len([
f for f in os.listdir('lib/spiral_arms')
if re.match(r'{}-[0-9]+\.pickle'.format(sid), f)
])
n_spirals_original = np.fromiter(map(get_n_spirals_in_model, ss_ids), dtype=int)
n_spirals_validation = np.fromiter(map(get_n_spirals_in_model, validation_ids), dtype=int)
print('N_spirals agree {:03.2%} of the time'.format(
sum(np.abs(n_spirals_original - n_spirals_validation) < 1) / len(n_spirals_validation)
))
print('N_spirals within 1 {:03.2%} of the time'.format(
sum(np.abs(n_spirals_original - n_spirals_validation) < 2) / len(n_spirals_validation)
))
###Output
N_spirals agree 68.37% of the time
N_spirals within 1 90.82% of the time
###Markdown
And what of morphology? How consistent are the isophotes for our aggregated shapes?
###Code
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(15, 10))
ax_disk, ax_bulge, ax_bar = np.array(axes).T
# Disk
ax = ax_disk
ax[0].plot(
gzb_models.loc[ss_ids]['disk-axRatio'],
gzb_models.loc[validation_ids]['disk-axRatio'],
'.', c='C0',
)
ax[1].plot(
gzb_models.loc[ss_ids]['disk-rEff'],
gzb_models.loc[validation_ids]['disk-rEff'],
'.', c='C0',
)
ax[0].set_title('Disk ellipticity')
ax[1].set_title('Disk size')
for a in ax:
a.set_ylabel('Validation subset');
ax[1].set_xlabel('Original subset')
# Bulge
ax = ax_bulge
ax[0].plot(
gzb_models.loc[ss_ids]['bulge-axRatio'],
gzb_models.loc[validation_ids]['bulge-axRatio'],
'.', c='C1',
)
ax[1].plot(
gzb_models.loc[ss_ids]['bulge-rEff'],
gzb_models.loc[validation_ids]['bulge-rEff'],
'.', c='C1',
)
ax[0].set_title('Bulge ellipticity')
ax[1].set_title('Bulge size')
ax[1].set_xlabel('Original subset')
# Bar
ax = ax_bar
ax[0].plot(
gzb_models.loc[ss_ids]['bar-axRatio'],
gzb_models.loc[validation_ids]['bar-axRatio'],
'.', c='C2',
)
ax[1].plot(
gzb_models.loc[ss_ids]['bar-rEff'],
gzb_models.loc[validation_ids]['bar-rEff'],
'.', c='C2',
)
ax[0].set_title('Bar ellipticity')
ax[1].set_title('Bar size')
ax[1].set_xlabel('Original subset')
for ax in (ax_disk, ax_bulge, ax_bar):
for a in ax:
l = a.get_xlim() + a.get_ylim()
lims = min(l), max(l)
a.plot((-1e3, 1e3), (-1e3, 1e3), 'k', alpha=0.2, linewidth=1)
a.set_xlim(lims); a.set_ylim(lims)
plt.savefig('duplicates_plots/component_sizing.pdf', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
And spiral arm pitch angles?
###Code
def get_pa(sid):
arms = [
Arm.load(os.path.join('lib/spiral_arms', f))
for f in os.listdir('lib/spiral_arms')
if re.match(r'{}-[0-9]+\.pickle'.format(sid), f)
]
if not len(arms) > 0:
return np.nan, np.nan
p = arms[0].get_parent()
return p.get_pitch_angle(arms) + (len(arms),)
pa_original = pd.DataFrame(
list(map(get_pa, ss_ids)),
columns=('pa', 'sigma_pa', 'n_arms'),
index=dr8ids,
)
pa_validation = pd.DataFrame(
list(map(get_pa, validation_ids)),
columns=('pa', 'sigma_pa', 'n_arms'),
index=dr8ids,
)
mask = pa_original['n_arms'] == pa_validation['n_arms']
plt.figure(figsize=(5, 5))
mask = pa_original['n_arms'] == pa_validation['n_arms']
plt.errorbar(
pa_original[mask].iloc[:, 0],
pa_validation[mask].iloc[:, 0],
xerr=pa_original[mask].iloc[:, 1],
yerr=pa_validation[mask].iloc[:, 1],
fmt='g.'
)
plt.errorbar(
pa_original[~mask].iloc[:, 0],
pa_validation[~mask].iloc[:, 0],
xerr=pa_original[~mask].iloc[:, 1],
yerr=pa_validation[~mask].iloc[:, 1],
fmt='r.'
)
l = plt.xlim() + plt.ylim()
lims = min(l), max(l)
plt.plot((-90, 90), (-90, 90), 'k', alpha=0.2, linewidth=1)
plt.xlim(lims); plt.ylim(lims)
plt.xlabel('Pitch angle, original subset [degrees]')
plt.ylabel('Pitch angle, validation subset [degrees]')
plt.savefig('duplicates_plots/pa_comparison.pdf', bbox_inches='tight')
foo = pa_original.query('pa < 5').index
pa_original.loc[foo], pa_validation.loc[foo]
ss_ids[dr8ids == 587741600952615088], validation_ids[dr8ids == 587741600952615088]
###Output
_____no_output_____ |
notebooks/NumPy/Intermediate NumPy.ipynb | ###Markdown
Intermediate NumPyUnidata Python Workshop Overview:* **Teaching:** 15 minutes* **Exercises:** 20 minutes Questions1. How do we work with the multiple dimensions in a NumPy Array?1. How can we extract irregular subsets of data?1. How can we sort an array? Objectives1. Using axes to slice arrays1. Index arrays using true and false1. Index arrays using arrays of indices 1. Using axes to slice arraysThe solution to the last exercise in the Numpy Basics notebook introduces an important concept when working with NumPy: the axis. This indicates the particular dimension along which a function should operate (provided the function does something taking multiple values and converts to a single value). Let's look at a concrete example with `sum`:
###Code
# Convention for import to get shortened namespace
import numpy as np
# Create an array for testing
a = np.arange(12).reshape(3, 4)
a
# This calculates the total of all values in the array
np.sum(a)
# Keep this in mind:
a.shape
# Instead, take the sum across the rows:
np.sum(a, axis=0)
# Or do the same and take the some across columns:
np.sum(a, axis=1)
###Output
_____no_output_____
###Markdown
EXERCISE: Finish the code below to calculate advection. The trick is to figure out how to do the summation.
###Code
# Synthetic data
temp = np.random.randn(100, 50)
u = np.random.randn(100, 50)
v = np.random.randn(100, 50)
# Calculate the gradient components
gradx, grady = np.gradient(temp)
# Turn into an array of vectors:
# axis 0 is x position
# axis 1 is y position
# axis 2 is the vector components
grad_vec = np.dstack([gradx, grady])
print(grad_vec.shape)
# Turn wind components into vector
wind_vec = np.dstack([u, v])
# Calculate advection, the dot product of wind and the negative of gradient
# DON'T USE NUMPY.DOT (doesn't work). Multiply and add.
# %load solutions/advection.py
###Output
_____no_output_____
###Markdown
Top 2. Indexing Arrays with Boolean ValuesNumpy can easily create arrays of boolean values and use those to select certain values to extract from an array
###Code
# Create some synthetic data representing temperature and wind speed data
np.random.seed(19990503) # Make sure we all have the same data
temp = (20 * np.cos(np.linspace(0, 2 * np.pi, 100)) +
50 + 2 * np.random.randn(100))
spd = (np.abs(10 * np.sin(np.linspace(0, 2 * np.pi, 100)) +
10 + 5 * np.random.randn(100)))
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(temp, 'tab:red')
plt.plot(spd, 'tab:blue');
###Output
_____no_output_____
###Markdown
By doing a comparison between a NumPy array and a value, we get an array of values representing the results of the comparison between each element and the value
###Code
temp > 45
###Output
_____no_output_____
###Markdown
We can take the resulting array and use this to index into theNumPy array and retrieve the values where the result was true
###Code
print(temp[temp > 45])
###Output
_____no_output_____
###Markdown
So long as the size of the boolean array matches the data, the boolean array can come from anywhere
###Code
print(temp[spd > 10])
# Make a copy so we don't modify the original data
temp2 = temp.copy()
# Replace all places where spd is <10 with NaN (not a number) so matplotlib skips it
temp2[spd < 10] = np.nan
plt.plot(temp2, 'tab:red')
###Output
_____no_output_____
###Markdown
Can also combine multiple boolean arrays using the syntax for bitwise operations. **MUST HAVE PARENTHESES** due to operator precedence.
###Code
print(temp[(temp < 45) & (spd > 10)])
###Output
_____no_output_____
###Markdown
EXERCISE: Heat index is only defined for temperatures >= 80F and relative humidity values >= 40%. Using the data generated below, use boolean indexing to extract the data where heat index has a valid value.
###Code
# Here's the "data"
np.random.seed(19990503) # Make sure we all have the same data
temp = (20 * np.cos(np.linspace(0, 2 * np.pi, 100)) +
80 + 2 * np.random.randn(100))
rh = (np.abs(20 * np.cos(np.linspace(0, 4 * np.pi, 100)) +
50 + 5 * np.random.randn(100)))
# Create a mask for the two conditions described above
# good_heat_index =
# Use this mask to grab the temperature and relative humidity values that together
# will give good heat index values
# temp[] ?
# BONUS POINTS: Plot only the data where heat index is defined by
# inverting the mask (using `~mask`) and setting invalid values to np.nan
# %load solutions/heat_index.py
###Output
_____no_output_____
###Markdown
Top 3. Indexing using arrays of indicesYou can also use a list or array of indices to extract particular values--this is a natural extension of the regular indexing. For instance, just as we can select the first element:
###Code
print(temp[0])
###Output
_____no_output_____
###Markdown
We can also extract the first, fifth, and tenth elements:
###Code
print(temp[[0, 4, 9]])
###Output
_____no_output_____
###Markdown
One of the ways this comes into play is trying to sort numpy arrays using `argsort`. This function returns the indices of the array that give the items in sorted order. So for our temp "data":
###Code
inds = np.argsort(temp)
print(inds)
###Output
_____no_output_____
###Markdown
We can use this array of indices to pass into temp to get it in sorted order:
###Code
print(temp[inds])
###Output
_____no_output_____
###Markdown
Or we can slice `inds` to only give the 10 highest temperatures:
###Code
ten_highest = inds[-10:]
print(temp[ten_highest])
###Output
_____no_output_____
###Markdown
There are other numpy arg functions that return indices for operating:
###Code
np.*arg*?
###Output
_____no_output_____
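###Markdown
For instance (an added illustration, not part of the original workshop notebook), np.argmax and np.argmin return the index of the largest or smallest value, which can then be used to index back into the data.
###Code
# Added example: find where the warmest value sits and pull it out
hottest = np.argmax(temp)
print(hottest, temp[hottest])
###Output
_____no_output_____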
###Markdown
###Markdown
Intermediate NumPyUnidata Python Workshop Overview:* **Teaching:** 20 minutes* **Exercises:** 25 minutes Questions1. How do we work with the multiple dimensions in a NumPy Array?1. How can we extract irregular subsets of data?1. How can we sort an array? Objectives1. Index and slice arrays1. Index arrays using true and false1. Index arrays using arrays of indices 1. Index and slice arraysIndexing is how we pull individual data items out of an array. Slicing extends this process to pulling out a regular set of the items.
###Code
# Convention for import to get shortened namespace
import numpy as np
# Create an array for testing
a = np.arange(12).reshape(3, 4)
a
###Output
_____no_output_____
###Markdown
Indexing in Python is 0-based, so the command below looks for the 2nd item along the first dimension (row) and the 3rd along the second dimension (column).
###Code
a[1, 2]
###Output
_____no_output_____
###Markdown
Can also just index on one dimension
###Code
a[2]
###Output
_____no_output_____
###Markdown
Negative indices are also allowed, which permit indexing relative to the end of the array.
###Code
a[0, -1]
###Output
_____no_output_____
###Markdown
Slicing syntax is written as `start:stop[:step]`, where all numbers are optional.- defaults: - start = 0 - end = len(dim) - step = 1- The second colon is also optional if no step is used.It should be noted that end represents one past the last item; one can also think of it as a half open interval: `[start, end)`
###Code
# Get the 2nd and 3rd rows
a[1:3]
# All rows and 3rd column
a[:, 2]
# ... can be used to replace one or more full slices
a[..., 2]
# Slice every other row
a[::2]
# You can also slice using negative indices
a[:, :-1]
###Output
_____no_output_____
###Markdown
EXERCISE: The code below calculates a two point average using a Python list and loop. Convert it to obtain the same results using NumPy slicing Bonus points: Can you extend the NumPy version to do a 3 point (running) average?
###Code
data = [1, 3, 5, 7, 9, 11]
out = []
# Look carefully at the loop. Think carefully about the sequence of values
# that data[i] takes--is there some way to get those values as a numpy slice?
# What about for data[i + 1]?
for i in range(len(data) - 1):
out.append((data[i] + data[i + 1]) / 2)
print(out)
###Output
_____no_output_____
###Markdown
View Solution: `data = np.array([1, 3, 5, 7, 9, 11]); out = (data[:-1] + data[1:]) / 2; print(out)` View Bonus Solution: `data = np.array([1, 3, 5, 7, 9, 11]); out = (data[2:] + data[1:-1] + data[:-2]) / 3; print(out)` EXERCISE: Given the array of data below, calculate the total of each of the columns (i.e. add each of the three rows together):
###Code
data = np.arange(12).reshape(3, 4)
# total = ?
###Output
_____no_output_____
###Markdown
View Solution: `print(data[0] + data[1] + data[2])` Or we can use numpy's sum and use the "axis" argument: `print(np.sum(data, axis=0))` The solution to the last exercise introduces an important concept when working with NumPy: the axis. This indicates the particular dimension along which a function should operate (provided the function does something taking multiple values and converts to a single value). Let's look at a concrete example with `sum`:
###Code
a
# This calculates the total of all values in the array
np.sum(a)
# Keep this in mind:
a.shape
# Instead, take the sum across the rows:
np.sum(a, axis=0)
# Or do the same and take the sum across columns:
np.sum(a, axis=1)
###Output
_____no_output_____
###Markdown
EXERCISE: Finish the code below to calculate advection. The trick is to figure out how to do the summation.
###Code
# Synthetic data
temp = np.random.randn(100, 50)
u = np.random.randn(100, 50)
v = np.random.randn(100, 50)
# Calculate the gradient components
gradx, grady = np.gradient(temp)
# Turn into an array of vectors:
# axis 0 is x position
# axis 1 is y position
# axis 2 is the vector components
grad_vec = np.dstack([gradx, grady])
print(grad_vec.shape)
# Turn wind components into vector
wind_vec = np.dstack([u, v])
# Calculate advection, the dot product of wind and the negative of gradient
# DON'T USE NUMPY.DOT (doesn't work). Multiply and add.
###Output
_____no_output_____
###Markdown
View Solution: `advec = (wind_vec * -grad_vec).sum(axis=-1); print(advec.shape)` Top 2. Indexing Arrays with Boolean ValuesNumpy can easily create arrays of boolean values and use those to select certain values to extract from an array
###Code
# Create some synthetic data representing temperature and wind speed data
np.random.seed(19990503) # Make sure we all have the same data
temp = (20 * np.cos(np.linspace(0, 2 * np.pi, 100)) +
50 + 2 * np.random.randn(100))
spd = (np.abs(10 * np.sin(np.linspace(0, 2 * np.pi, 100)) +
10 + 5 * np.random.randn(100)))
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(temp, 'tab:red')
plt.plot(spd, 'tab:blue');
###Output
_____no_output_____
###Markdown
By doing a comparison between a NumPy array and a value, we get an array of values representing the results of the comparison between each element and the value
###Code
temp > 45
###Output
_____no_output_____
###Markdown
We can take the resulting array and use this to index into theNumPy array and retrieve the values where the result was true
###Code
print(temp[temp > 45])
###Output
_____no_output_____
###Markdown
So long as the size of the boolean array matches the data, the boolean array can come from anywhere
###Code
print(temp[spd > 10])
# Make a copy so we don't modify the original data
temp2 = temp.copy()
# Replace all places where spd is <10 with NaN (not a number) so matplotlib skips it
temp2[spd < 10] = np.nan
plt.plot(temp2, 'tab:red')
###Output
_____no_output_____
###Markdown
Can also combine multiple boolean arrays using the syntax for bitwise operations. **MUST HAVE PARENTHESES** due to operator precedence.
###Code
print(temp[(temp < 45) & (spd > 10)])
###Output
_____no_output_____
###Markdown
EXERCISE: Heat index is only defined for temperatures >= 80F and relative humidity values >= 40%. Using the data generated below, use boolean indexing to extract the data where heat index has a valid value.
###Code
import numpy as np
# Here's the "data"
np.random.seed(19990503) # Make sure we all have the same data
temp = (20 * np.cos(np.linspace(0, 2 * np.pi, 100)) +
80 + 2 * np.random.randn(100))
rh = (np.abs(20 * np.cos(np.linspace(0, 4 * np.pi, 100)) +
50 + 5 * np.random.randn(100)))
# Create a mask for the two conditions described above
# good_heat_index =
# Use this mask to grab the temperature and relative humidity values that together
# will give good heat index values
# temp[] ?
# BONUS POINTS: Plot only the data where heat index is defined by
# inverting the mask (using `~mask`) and setting invalid values to np.nan
###Output
_____no_output_____
###Markdown
View Solution: `import numpy as np` Here's the "data" (seeded so we all have the same values): `np.random.seed(19990503); temp = (20 * np.cos(np.linspace(0, 2 * np.pi, 100)) + 80 + 2 * np.random.randn(100)); rh = (np.abs(20 * np.cos(np.linspace(0, 4 * np.pi, 100)) + 50 + 5 * np.random.randn(100)))` Create a mask for the two conditions described above: `good_heat_index = (temp >= 80) & (rh >= 0.4)` Use this mask to grab the temperature and relative humidity values that together will give good heat index values: `print(temp[good_heat_index])` BONUS POINTS: Plot only the data where heat index is defined by inverting the mask (using `~mask`) and setting invalid values to np.nan: `temp[~good_heat_index] = np.nan; plt.plot(temp, 'tab:red')` Top 3. Indexing using arrays of indicesYou can also use a list or array of indices to extract particular values--this is a natural extension of the regular indexing. For instance, just as we can select the first element:
###Code
print(temp[0])
###Output
_____no_output_____
###Markdown
We can also extract the first, fifth, and tenth elements:
###Code
print(temp[[0, 4, 9]])
###Output
_____no_output_____
###Markdown
One of the ways this comes into play is trying to sort numpy arrays using `argsort`. This function returns the indices of the array that give the items in sorted order. So for our temp "data":
###Code
inds = np.argsort(temp)
print(inds)
###Output
_____no_output_____
###Markdown
We can use this array of indices to pass into temp to get it in sorted order:
###Code
print(temp[inds])
###Output
_____no_output_____
###Markdown
Or we can slice `inds` to only give the 10 highest temperatures:
###Code
ten_highest = inds[-10:]
print(temp[ten_highest])
###Output
_____no_output_____
###Markdown
There are other numpy arg functions that return indices for operating:
###Code
np.*arg*?
###Output
_____no_output_____
###Markdown
Intermediate NumPyUnidata Python Workshop Overview:* **Teaching:** 20 minutes* **Exercises:** 25 minutes Questions1. How do we work with the multiple dimensions in a NumPy Array?1. How can we extract irregular subsets of data?1. How can we sort an array? Objectives1. Index and slice arrays1. Index arrays using true and false1. Index arrays using arrays of indices 1. Index and slice arraysIndexing is how we pull individual data items out of an array. Slicing extends this process to pulling out a regular set of the items.
###Code
# Convention for import to get shortened namespace
import numpy as np
# Create an array for testing
a = np.arange(12).reshape(3, 4)
a
###Output
_____no_output_____
###Markdown
Indexing in Python is 0-based, so the command below looks for the 2nd item along the first dimension (row) and the 3rd along the second dimension (column).
###Code
a[1, 2]
###Output
_____no_output_____
###Markdown
Can also just index on one dimension
###Code
a[2]
###Output
_____no_output_____
###Markdown
Negative indices are also allowed, which permit indexing relative to the end of the array.
###Code
a[0, -1]
###Output
_____no_output_____
###Markdown
Slicing syntax is written as `start:stop[:step]`, where all numbers are optional.- defaults: - start = 0 - end = len(dim) - step = 1- The second colon is also optional if no step is used.It should be noted that end represents one past the last item; one can also think of it as a half open interval: `[start, end)`
###Code
# Get the 2nd and 3rd rows
a[1:3]
# All rows and 3rd column
a[:, 2]
# ... can be used to replace one or more full slices
a[..., 2]
# Slice every other row
a[::2]
# You can also slice using negative indices
a[:, :-1]
###Output
_____no_output_____
###Markdown
EXERCISE: The code below calculates a two point average using a Python list and loop. Convert it to obtain the same results using NumPy slicing Bonus points: Can you extend the NumPy version to do a 3 point (running) average?
###Code
data = [1, 3, 5, 7, 9, 11]
out = []
# Look carefully at the loop. Think carefully about the sequence of values
# that data[i] takes--is there some way to get those values as a numpy slice?
# What about for data[i + 1]?
for i in range(len(data) - 1):
out.append((data[i] + data[i + 1]) / 2)
print(out)
# %load solutions/average.py
# %load solutions/average3.py
###Output
_____no_output_____
###Markdown
EXERCISE: Given the array of data below, calculate the total of each of the columns (i.e. add each of the three rows together):
###Code
data = np.arange(12).reshape(3, 4)
# total = ?
# %load solutions/column_sums.py
###Output
_____no_output_____
###Markdown
The solution to the last exercise introduces an important concept when working with NumPy: the axis. This indicates the particular dimension along which a function should operate (provided the function does something taking multiple values and converts to a single value). Let's look at a concrete example with `sum`:
###Code
a
# This calculates the total of all values in the array
np.sum(a)
# Keep this in mind:
a.shape
# Instead, take the sum across the rows:
np.sum(a, axis=0)
# Or do the same and take the sum across columns:
np.sum(a, axis=1)
###Output
_____no_output_____
###Markdown
EXERCISE: Finish the code below to calculate advection. The trick is to figure out how to do the summation.
###Code
# Synthetic data
temp = np.random.randn(100, 50)
u = np.random.randn(100, 50)
v = np.random.randn(100, 50)
# Calculate the gradient components
gradx, grady = np.gradient(temp)
# Turn into an array of vectors:
# axis 0 is x position
# axis 1 is y position
# axis 2 is the vector components
grad_vec = np.dstack([gradx, grady])
print(grad_vec.shape)
# Turn wind components into vector
wind_vec = np.dstack([u, v])
# Calculate advection, the dot product of wind and the negative of gradient
# DON'T USE NUMPY.DOT (doesn't work). Multiply and add.
# %load solutions/advection.py
###Output
_____no_output_____
###Markdown
Top 2. Indexing Arrays with Boolean ValuesNumpy can easily create arrays of boolean values and use those to select certain values to extract from an array
###Code
# Create some synthetic data representing temperature and wind speed data
np.random.seed(19990503) # Make sure we all have the same data
temp = (20 * np.cos(np.linspace(0, 2 * np.pi, 100)) +
50 + 2 * np.random.randn(100))
spd = (np.abs(10 * np.sin(np.linspace(0, 2 * np.pi, 100)) +
10 + 5 * np.random.randn(100)))
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(temp, 'tab:red')
plt.plot(spd, 'tab:blue');
###Output
_____no_output_____
###Markdown
By doing a comparison between a NumPy array and a value, we get an array of values representing the results of the comparison between each element and the value
###Code
temp > 45
###Output
_____no_output_____
###Markdown
We can take the resulting array and use this to index into theNumPy array and retrieve the values where the result was true
###Code
print(temp[temp > 45])
###Output
_____no_output_____
###Markdown
So long as the size of the boolean array matches the data, the boolean array can come from anywhere
###Code
print(temp[spd > 10])
# Make a copy so we don't modify the original data
temp2 = temp.copy()
# Replace all places where spd is <10 with NaN (not a number) so matplotlib skips it
temp2[spd < 10] = np.nan
plt.plot(temp2, 'tab:red')
###Output
_____no_output_____
###Markdown
Can also combine multiple boolean arrays using the syntax for bitwise operations. **MUST HAVE PARENTHESES** due to operator precedence.
###Code
print(temp[(temp < 45) & (spd > 10)])
###Output
_____no_output_____
###Markdown
EXERCISE: Heat index is only defined for temperatures >= 80F and relative humidity values >= 40%. Using the data generated below, use boolean indexing to extract the data where heat index has a valid value.
###Code
import numpy as np
# Here's the "data"
np.random.seed(19990503) # Make sure we all have the same data
temp = (20 * np.cos(np.linspace(0, 2 * np.pi, 100)) +
80 + 2 * np.random.randn(100))
rh = (np.abs(20 * np.cos(np.linspace(0, 4 * np.pi, 100)) +
50 + 5 * np.random.randn(100)))
# Create a mask for the two conditions described above
# good_heat_index =
# Use this mask to grab the temperature and relative humidity values that together
# will give good heat index values
# temp[] ?
# BONUS POINTS: Plot only the data where heat index is defined by
# inverting the mask (using `~mask`) and setting invalid values to np.nan
# %load solutions/heat_index.py
###Output
_____no_output_____
###Markdown
Top 3. Indexing using arrays of indicesYou can also use a list or array of indices to extract particular values--this is a natural extension of the regular indexing. For instance, just as we can select the first element:
###Code
print(temp[0])
###Output
_____no_output_____
###Markdown
We can also extract the first, fifth, and tenth elements:
###Code
print(temp[[0, 4, 9]])
###Output
_____no_output_____
###Markdown
One of the ways this comes into play is trying to sort numpy arrays using `argsort`. This function returns the indices of the array that give the items in sorted order. So for our temp "data":
###Code
inds = np.argsort(temp)
print(inds)
###Output
_____no_output_____
###Markdown
We can use this array of indices to pass into temp to get it in sorted order:
###Code
print(temp[inds])
###Output
_____no_output_____
###Markdown
Or we can slice `inds` to only give the 10 highest temperatures:
###Code
ten_highest = inds[-10:]
print(temp[ten_highest])
###Output
_____no_output_____
###Markdown
There are other numpy arg functions that return indices for operating:
###Code
np.*arg*?
###Output
_____no_output_____
###Markdown
Intermediate NumPyUnidata Python Workshop Overview:* **Teaching:** 20 minutes* **Exercises:** 25 minutes Questions1. How do we work with the multiple dimensions in a NumPy Array?1. How can we extract irregular subsets of data?1. How can we sort an array? Objectives1. Index and slice arrays1. Index arrays using true and false1. Index arrays using arrays of indices 1. Index and slice arraysIndexing is how we pull individual data items out of an array. Slicing extends this process to pulling out a regular set of the items.
###Code
# Convention for import to get shortened namespace
import numpy as np
# Create an array for testing
a = np.arange(12).reshape(3, 4)
a
###Output
_____no_output_____
###Markdown
Indexing in Python is 0-based, so the command below looks for the 2nd item along the first dimension (row) and the 3rd along the second dimension (column).
###Code
a[1, 2]
###Output
_____no_output_____
###Markdown
Can also just index on one dimension
###Code
a[2]
###Output
_____no_output_____
###Markdown
Negative indices are also allowed, which permit indexing relative to the end of the array.
###Code
a[0, -1]
###Output
_____no_output_____
###Markdown
Slicing syntax is written as `start:stop[:step]`, where all numbers are optional.- defaults: - start = 0 - end = len(dim) - step = 1- The second colon is also optional if no step is used.It should be noted that end represents one past the last item; one can also think of it as a half open interval: `[start, end)`
###Code
# Get the 2nd and 3rd rows
a[1:3]
# All rows and 3rd column
a[:, 2]
# ... can be used to replace one or more full slices
a[..., 2]
# Slice every other row
a[::2]
# You can also slice using negative indices
a[:, :-1]
###Output
_____no_output_____
###Markdown
EXERCISE: The code below calculates a two point average using a Python list and loop. Convert it to obtain the same results using NumPy slicing Bonus points: Can you extend the NumPy version to do a 3 point (running) average?
###Code
data = [1, 3, 5, 7, 9, 11]
out = []
# Look carefully at the loop. Think carefully about the sequence of values
# that data[i] takes--is there some way to get those values as a numpy slice?
# What about for data[i + 1]?
for i in range(len(data) - 1):
out.append((data[i] + data[i + 1]) / 2)
print(out)
###Output
_____no_output_____
###Markdown
View Solution: `data = np.array([1, 3, 5, 7, 9, 11]); out = (data[:-1] + data[1:]) / 2; print(out)` View Bonus Solution: `data = np.array([1, 3, 5, 7, 9, 11]); out = (data[2:] + data[1:-1] + data[:-2]) / 3; print(out)` EXERCISE: Given the array of data below, calculate the total of each of the columns (i.e. add each of the three rows together):
###Code
data = np.arange(12).reshape(3, 4)
# total = ?
###Output
_____no_output_____
###Markdown
View Solution: `print(data[0] + data[1] + data[2])` Or we can use numpy's sum and use the "axis" argument: `print(np.sum(data, axis=0))` The solution to the last exercise introduces an important concept when working with NumPy: the axis. This indicates the particular dimension along which a function should operate (provided the function does something taking multiple values and converts to a single value). Let's look at a concrete example with `sum`:
###Code
a
# This calculates the total of all values in the array
np.sum(a)
# Keep this in mind:
a.shape
# Instead, take the sum across the rows:
np.sum(a, axis=0)
# Or do the same and take the sum across columns:
np.sum(a, axis=1)
###Output
_____no_output_____
###Markdown
EXERCISE: Finish the code below to calculate advection. The trick is to figure out how to do the summation.
###Code
# Synthetic data
temp = np.random.randn(100, 50)
u = np.random.randn(100, 50)
v = np.random.randn(100, 50)
# Calculate the gradient components
gradx, grady = np.gradient(temp)
# Turn into an array of vectors:
# axis 0 is x position
# axis 1 is y position
# axis 2 is the vector components
grad_vec = np.dstack([gradx, grady])
print(grad_vec.shape)
# Turn wind components into vector
wind_vec = np.dstack([u, v])
# Calculate advection, the dot product of wind and the negative of gradient
# DON'T USE NUMPY.DOT (doesn't work). Multiply and add.
###Output
_____no_output_____
###Markdown
View Solution: `advec = (wind_vec * -grad_vec).sum(axis=-1); print(advec.shape)` Top 2. Indexing Arrays with Boolean ValuesNumpy can easily create arrays of boolean values and use those to select certain values to extract from an array
###Code
# Create some synthetic data representing temperature and wind speed data
np.random.seed(19990503) # Make sure we all have the same data
temp = (20 * np.cos(np.linspace(0, 2 * np.pi, 100)) +
50 + 2 * np.random.randn(100))
spd = (np.abs(10 * np.sin(np.linspace(0, 2 * np.pi, 100)) +
10 + 5 * np.random.randn(100)))
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(temp, 'tab:red')
plt.plot(spd, 'tab:blue');
###Output
_____no_output_____
###Markdown
By doing a comparison between a NumPy array and a value, we get an array of values representing the results of the comparison between each element and the value
###Code
temp > 45
###Output
_____no_output_____
###Markdown
We can take the resulting array and use this to index into theNumPy array and retrieve the values where the result was true
###Code
print(temp[temp > 45])
###Output
_____no_output_____
###Markdown
So long as the size of the boolean array matches the data, the boolean array can come from anywhere
###Code
print(temp[spd > 10])
# Make a copy so we don't modify the original data
temp2 = temp.copy()
# Replace all places where spd is <10 with NaN (not a number) so matplotlib skips it
temp2[spd < 10] = np.nan
plt.plot(temp2, 'tab:red')
###Output
_____no_output_____
###Markdown
Can also combine multiple boolean arrays using the syntax for bitwise operations. **MUST HAVE PARENTHESES** due to operator precedence.
###Code
print(temp[(temp < 45) & (spd > 10)])
###Output
_____no_output_____
###Markdown
EXERCISE: Heat index is only defined for temperatures >= 80F and relative humidity values >= 40%. Using the data generated below, use boolean indexing to extract the data where heat index has a valid value.
###Code
import numpy as np
# Here's the "data"
np.random.seed(19990503) # Make sure we all have the same data
temp = (20 * np.cos(np.linspace(0, 2 * np.pi, 100)) +
80 + 2 * np.random.randn(100))
rh = (np.abs(20 * np.cos(np.linspace(0, 4 * np.pi, 100)) +
50 + 5 * np.random.randn(100)))
# Create a mask for the two conditions described above
# good_heat_index =
# Use this mask to grab the temperature and relative humidity values that together
# will give good heat index values
# temp[] ?
# BONUS POINTS: Plot only the data where heat index is defined by
# inverting the mask (using `~mask`) and setting invalid values to np.nan
###Output
_____no_output_____
###Markdown
View Solution: `import numpy as np` Here's the "data" (seeded so we all have the same values): `np.random.seed(19990503); temp = (20 * np.cos(np.linspace(0, 2 * np.pi, 100)) + 80 + 2 * np.random.randn(100)); rh = (np.abs(20 * np.cos(np.linspace(0, 4 * np.pi, 100)) + 50 + 5 * np.random.randn(100)))` Create a mask for the two conditions described above: `good_heat_index = (temp >= 80) & (rh >= 0.4)` Use this mask to grab the temperature and relative humidity values that together will give good heat index values: `print(temp[good_heat_index])` BONUS POINTS: Plot only the data where heat index is defined by inverting the mask (using `~mask`) and setting invalid values to np.nan: `temp[~good_heat_index] = np.nan; plt.plot(temp, 'tab:red')` Top 3. Indexing using arrays of indicesYou can also use a list or array of indices to extract particular values--this is a natural extension of the regular indexing. For instance, just as we can select the first element:
###Code
print(temp[0])
###Output
_____no_output_____
###Markdown
We can also extract the first, fifth, and tenth elements:
###Code
print(temp[[0, 4, 9]])
###Output
_____no_output_____
###Markdown
One of the ways this comes into play is trying to sort numpy arrays using `argsort`. This function returns the indices of the array that give the items in sorted order. So for our temp "data":
###Code
inds = np.argsort(temp)
print(inds)
###Output
_____no_output_____
###Markdown
We can use this array of indices to pass into temp to get it in sorted order:
###Code
print(temp[inds])
###Output
_____no_output_____
###Markdown
Or we can slice `inds` to only give the 10 highest temperatures:
###Code
ten_highest = inds[-10:]
print(temp[ten_highest])
###Output
_____no_output_____
###Markdown
There are other numpy arg functions that return indices for operating:
###Code
np.*arg*?
###Output
_____no_output_____ |
intermediate_notebooks/benchmarks/cugraph_benchmarks/louvain_benchmark.ipynb | ###Markdown
Louvain Performance BenchmarkingThis notebook benchmarks the performance improvement of running the Louvain clustering algorithm within cuGraph against NetworkX. The test is run over eight test networks (graphs) and the results are then plotted. Notebook Credits Original Authors: Bradley Rees Last Edit: 08/06/2019 Test Environment RAPIDS Versions: 0.9.0 Test Hardware: GV100 32G, CUDA 10.0 Intel(R) Core(TM) CPU i7-7800X @ 3.50GHz 32GB system memory Updates- moved loading plotting libraries to the front so that dependencies can be checked before running algorithms- added edge values - changed timing to include Graph creation for both cuGraph and NetworkX. This will better represent end-to-end times Dependencies- RAPIDS cuDF and cuGraph version 0.6.0 - NetworkX - Matplotlib - Scipy - data prep script run Note: Comparison against published resultsThe cuGraph blog post included performance numbers that were collected over a year ago. For the test graphs, int32 values are now used. That improves GPU performance. Additionally, the initial benchmarks were measured on a P100 GPU. This test only compares the modularity scores, and a run counts as a success if the scores are within 15% of each other. That comparison is done by adjusting the NetworkX modularity score and then verifying that the cuGraph score is higher. cuGraph did a full validation of NetworkX results against cuGraph results. That included cross-validation of every cluster. That test is very slow and is not included here
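A minimal sketch of that acceptance check (added for illustration; the helper below is an assumption, not code taken from this notebook): relax the NetworkX modularity score by the 15% tolerance and require the cuGraph score to meet or beat it.
###Code
# Hypothetical helper illustrating the 15% modularity comparison described above
def modularity_close_enough(cugraph_score, nx_score, tolerance=0.15):
    # Success when cuGraph's modularity is at least the tolerance-adjusted NetworkX score
    return cugraph_score >= nx_score * (1.0 - tolerance)
###Output
_____no_output_____
###Markdown
First, load the libraries needed for the benchmark.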
###Code
# Import needed libraries
import time
import cugraph
import cudf
import os
# NetworkX libraries
try:
import community
except ModuleNotFoundError:
os.system('pip install python-louvain')
import community
import networkx as nx
from scipy.io import mmread
# Loading plotting libraries
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np
import matplotlib.pyplot as plt
!bash dataPrep.sh
###Output
mkdir: cannot create directory 'data': File exists
--2019-11-01 20:49:03-- https://sparse.tamu.edu/MM/DIMACS10/preferentialAttachment.tar.gz
Resolving sparse.tamu.edu (sparse.tamu.edu)... 128.194.136.136
Connecting to sparse.tamu.edu (sparse.tamu.edu)|128.194.136.136|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2027782 (1.9M) [application/x-gzip]
Saving to: 'preferentialAttachment.tar.gz'
preferentialAttachm 100%[===================>] 1.93M 3.48MB/s in 0.6s
2019-11-01 20:49:04 (3.48 MB/s) - 'preferentialAttachment.tar.gz' saved [2027782/2027782]
--2019-11-01 20:49:04-- https://sparse.tamu.edu/MM/DIMACS10/caidaRouterLevel.tar.gz
Resolving sparse.tamu.edu (sparse.tamu.edu)... 128.194.136.136
Connecting to sparse.tamu.edu (sparse.tamu.edu)|128.194.136.136|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2418742 (2.3M) [application/x-gzip]
Saving to: 'caidaRouterLevel.tar.gz'
caidaRouterLevel.ta 100%[===================>] 2.31M 3.76MB/s in 0.6s
2019-11-01 20:49:05 (3.76 MB/s) - 'caidaRouterLevel.tar.gz' saved [2418742/2418742]
--2019-11-01 20:49:05-- https://sparse.tamu.edu/MM/DIMACS10/coAuthorsDBLP.tar.gz
Resolving sparse.tamu.edu (sparse.tamu.edu)... 128.194.136.136
Connecting to sparse.tamu.edu (sparse.tamu.edu)|128.194.136.136|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3206075 (3.1M) [application/x-gzip]
Saving to: 'coAuthorsDBLP.tar.gz'
coAuthorsDBLP.tar.g 100%[===================>] 3.06M 3.99MB/s in 0.8s
2019-11-01 20:49:06 (3.99 MB/s) - 'coAuthorsDBLP.tar.gz' saved [3206075/3206075]
--2019-11-01 20:49:06-- https://sparse.tamu.edu/MM/LAW/dblp-2010.tar.gz
Resolving sparse.tamu.edu (sparse.tamu.edu)... 128.194.136.136
Connecting to sparse.tamu.edu (sparse.tamu.edu)|128.194.136.136|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2235407 (2.1M) [application/x-gzip]
Saving to: 'dblp-2010.tar.gz'
dblp-2010.tar.gz 100%[===================>] 2.13M 3.75MB/s in 0.6s
2019-11-01 20:49:07 (3.75 MB/s) - 'dblp-2010.tar.gz' saved [2235407/2235407]
--2019-11-01 20:49:07-- https://sparse.tamu.edu/MM/DIMACS10/citationCiteseer.tar.gz
Resolving sparse.tamu.edu (sparse.tamu.edu)... 128.194.136.136
Connecting to sparse.tamu.edu (sparse.tamu.edu)|128.194.136.136|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5082095 (4.8M) [application/x-gzip]
Saving to: 'citationCiteseer.tar.gz'
citationCiteseer.ta 100%[===================>] 4.85M 4.23MB/s in 1.1s
2019-11-01 20:49:08 (4.23 MB/s) - 'citationCiteseer.tar.gz' saved [5082095/5082095]
--2019-11-01 20:49:08-- https://sparse.tamu.edu/MM/DIMACS10/coPapersDBLP.tar.gz
Resolving sparse.tamu.edu (sparse.tamu.edu)... 128.194.136.136
Connecting to sparse.tamu.edu (sparse.tamu.edu)|128.194.136.136|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 36298718 (35M) [application/x-gzip]
Saving to: 'coPapersDBLP.tar.gz'
coPapersDBLP.tar.gz 100%[===================>] 34.62M 4.93MB/s in 7.2s
2019-11-01 20:49:16 (4.79 MB/s) - 'coPapersDBLP.tar.gz' saved [36298718/36298718]
--2019-11-01 20:49:16-- https://sparse.tamu.edu/MM/DIMACS10/coPapersCiteseer.tar.gz
Resolving sparse.tamu.edu (sparse.tamu.edu)... 128.194.136.136
Connecting to sparse.tamu.edu (sparse.tamu.edu)|128.194.136.136|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 36652888 (35M) [application/x-gzip]
Saving to: 'coPapersCiteseer.tar.gz'
coPapersCiteseer.ta 100%[===================>] 34.95M 4.93MB/s in 7.2s
2019-11-01 20:49:23 (4.82 MB/s) - 'coPapersCiteseer.tar.gz' saved [36652888/36652888]
--2019-11-01 20:49:23-- https://sparse.tamu.edu/MM/SNAP/as-Skitter.tar.gz
Resolving sparse.tamu.edu (sparse.tamu.edu)... 128.194.136.136
Connecting to sparse.tamu.edu (sparse.tamu.edu)|128.194.136.136|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 33172905 (32M) [application/x-gzip]
Saving to: 'as-Skitter.tar.gz'
as-Skitter.tar.gz 100%[===================>] 31.64M 4.92MB/s in 6.6s
2019-11-01 20:49:30 (4.79 MB/s) - 'as-Skitter.tar.gz' saved [33172905/33172905]
preferentialAttachment/preferentialAttachment.mtx
caidaRouterLevel/caidaRouterLevel.mtx
coAuthorsDBLP/coAuthorsDBLP.mtx
dblp-2010/dblp-2010.mtx
citationCiteseer/citationCiteseer.mtx
coPapersDBLP/coPapersDBLP.mtx
coPapersCiteseer/coPapersCiteseer.mtx
as-Skitter/as-Skitter.mtx
find: paths must precede expression: caidaRouterLevel.mtx
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec|time] [path...] [expression]
###Markdown
Define the test data
###Code
# Test File
data = {
'preferentialAttachment' : './data/preferentialAttachment.mtx',
'caidaRouterLevel' : './data/caidaRouterLevel.mtx',
'coAuthorsDBLP' : './data/coAuthorsDBLP.mtx',
'dblp' : './data/dblp-2010.mtx',
'citationCiteseer' : './data/citationCiteseer.mtx',
'coPapersDBLP' : './data/coPapersDBLP.mtx',
'coPapersCiteseer' : './data/coPapersCiteseer.mtx',
'as-Skitter' : './data/as-Skitter.mtx'
}
###Output
_____no_output_____
###Markdown
Define the testing functions
###Code
# Read in a dataset in MTX format
def read_mtx_file(mm_file):
print('Reading ' + str(mm_file) + '...')
d = mmread(mm_file).asfptype()
M = d.tocsr()
if M is None:
raise TypeError('Could not read the input graph')
if M.shape[0] != M.shape[1]:
raise TypeError('Shape is not square')
return M
# Run the cuGraph Louvain analytic (using nvGRAPH function)
def cugraph_call(M):
t1 = time.time()
# data
row_offsets = cudf.Series(M.indptr)
col_indices = cudf.Series(M.indices)
data = cudf.Series(M.data)
# create graph
G = cugraph.Graph()
G.add_adj_list(row_offsets, col_indices, data)
# cugraph Louvain Call
print(' cuGraph Solving... ')
df, mod = cugraph.louvain(G)
t2 = time.time() - t1
return t2, mod
# Run the NetworkX Louvain analytic. This is done in two parts since the modularity score is not returned
def networkx_call(M):
t1 = time.time()
# Directed NetworkX graph
Gnx = nx.Graph(M)
# Networkx
print(' NetworkX Solving... ')
parts = community.best_partition(Gnx)
# Calculating modularity scores for comparison
mod = community.modularity(parts, Gnx)
t2 = time.time() - t1
return t2, mod
###Output
_____no_output_____
###Markdown
Run the benchmarks
###Code
# Loop through each test file and compute the speedup
perf = []
names = []
for k,v in data.items():
M = read_mtx_file(v)
tr, modc = cugraph_call(M)
tn, modx = networkx_call(M)
speedUp = (tn / tr)
names.append(k)
perf.append(speedUp)
mod_delta = (0.85 * modx)
print(str(speedUp) + "x faster => cugraph " + str(tr) + " vs " + str(tn))
print("Modularity => cugraph " + str(modc) + " should be greater than " + str(mod_delta))
###Output
Reading ./data/preferentialAttachment.mtx...
cuGraph Solving...
NetworkX Solving...
3509.4500202625027x faster => cugraph 0.8648371696472168 vs 3035.1028225421906
Modularity => cugraph 0.19461682219817675 should be greater than 0.21973558127621454
Reading ./data/caidaRouterLevel.mtx...
cuGraph Solving...
NetworkX Solving...
7076.7607431556x faster => cugraph 0.04834103584289551 vs 342.0979447364807
Modularity => cugraph 0.7872923202092253 should be greater than 0.7289947349239256
Reading ./data/coAuthorsDBLP.mtx...
cuGraph Solving...
NetworkX Solving...
11893.139026724633x faster => cugraph 0.06750750541687012 vs 802.8761472702026
Modularity => cugraph 0.7648739273488195 should be greater than 0.7026254024456955
Reading ./data/dblp-2010.mtx...
cuGraph Solving...
NetworkX Solving...
12969.744546806074x faster => cugraph 0.07826042175292969 vs 1015.0176782608032
Modularity => cugraph 0.7506256512679915 should be greater than 0.7450002914515801
Reading ./data/citationCiteseer.mtx...
cuGraph Solving...
NetworkX Solving...
16875.667838933237x faster => cugraph 0.07159066200256348 vs 1208.1402323246002
Modularity => cugraph 0.6726575224227932 should be greater than 0.6845554405196591
Reading ./data/coPapersDBLP.mtx...
cuGraph Solving...
NetworkX Solving...
###Markdown
plot the output
###Code
%matplotlib inline
y_pos = np.arange(len(names))
plt.bar(y_pos, perf, align='center', alpha=0.5)
plt.xticks(y_pos, names)
plt.ylabel('Speed Up')
plt.title('Performance Speedup: cuGraph vs NetworkX')
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
Louvain Performance Benchmarking: This notebook benchmarks the performance improvement of running the Louvain clustering algorithm in cuGraph against NetworkX. The test is run over eight test networks (graphs) and the results are then plotted. Notebook Credits: Original Author: Bradley Rees. Last Edit: 08/06/2019. Test Environment: RAPIDS version 0.9.0; GV100 32G, CUDA 10.0; Intel(R) Core(TM) CPU i7-7800X @ 3.50GHz; 32GB system memory. Updates: moved loading of the plotting libraries to the front so that dependencies can be checked before running the algorithms; added edge values; changed timing to include graph creation for both cuGraph and NetworkX, which better represents end-to-end times. Dependencies: RAPIDS cuDF and cuGraph version 0.6.0, NetworkX, Matplotlib, Scipy, and the data prep script must have been run. Note: Comparison against published results: the cuGraph blog post included performance numbers that were collected over a year ago. For the test graphs, int32 values are now used, which improves GPU performance. Additionally, the initial benchmarks were measured on a P100 GPU. This test only compares the modularity scores, and success is when the scores are within 15% of each other. That comparison is done by adjusting the NetworkX modularity score and then verifying that the cuGraph score is higher. cuGraph did a full validation of NetworkX results against cuGraph results, which included cross-validation of every cluster; that test is very slow and is not included here.
###Code
# Import needed libraries
import time
import cugraph
import cudf
# NetworkX libraries
import community
import networkx as nx
from scipy.io import mmread
# Loading plotting libraries
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Define the test data
###Code
# Test File
data = {
'preferentialAttachment' : './data/preferentialAttachment.mtx',
'caidaRouterLevel' : './data/caidaRouterLevel.mtx',
'coAuthorsDBLP' : './data/coAuthorsDBLP.mtx',
'dblp' : './data/dblp-2010.mtx',
'citationCiteseer' : './data/citationCiteseer.mtx',
'coPapersDBLP' : './data/coPapersDBLP.mtx',
'coPapersCiteseer' : './data/coPapersCiteseer.mtx',
'as-Skitter' : './data/as-Skitter.mtx'
}
###Output
_____no_output_____
###Markdown
Define the testing functions
###Code
# Read in a dataset in MTX format
def read_mtx_file(mm_file):
print('Reading ' + str(mm_file) + '...')
d = mmread(mm_file).asfptype()
M = d.tocsr()
if M is None:
raise TypeError('Could not read the input graph')
if M.shape[0] != M.shape[1]:
raise TypeError('Shape is not square')
return M
# Run the cuGraph Louvain analytic (using nvGRAPH function)
def cugraph_call(M):
t1 = time.time()
# data
row_offsets = cudf.Series(M.indptr)
col_indices = cudf.Series(M.indices)
data = cudf.Series(M.data)
# create graph
G = cugraph.Graph()
G.add_adj_list(row_offsets, col_indices, data)
# cugraph Louvain Call
print(' cuGraph Solving... ')
df, mod = cugraph.louvain(G)
t2 = time.time() - t1
return t2, mod
# Run the NetworkX Louvain analytic. This is done in two parts since the modularity score is not returned
def networkx_call(M):
t1 = time.time()
# Directed NetworkX graph
Gnx = nx.Graph(M)
# Networkx
print(' NetworkX Solving... ')
parts = community.best_partition(Gnx)
# Calculating modularity scores for comparison
mod = community.modularity(parts, Gnx)
t2 = time.time() - t1
return t2, mod
###Output
_____no_output_____
###Markdown
Run the benchmarks
###Code
# Loop through each test file and compute the speedup
perf = []
names = []
for k,v in data.items():
M = read_mtx_file(v)
tr, modc = cugraph_call(M)
tn, modx = networkx_call(M)
speedUp = (tn / tr)
names.append(k)
perf.append(speedUp)
mod_delta = (0.85 * modx)
print(str(speedUp) + "x faster => cugraph " + str(tr) + " vs " + str(tn))
print("Modularity => cugraph " + str(modc) + " should be greater than " + str(mod_delta))
###Output
Reading ./data/preferentialAttachment.mtx...
cuGraph Solving...
NetworkX Solving...
1793.7776019147623x faster => cugraph 1.5131187438964844 vs 2714.198511838913
Modularity => cugraph 0.19461682219817675 should be greater than 0.2311266525308378
Reading ./data/caidaRouterLevel.mtx...
cuGraph Solving...
NetworkX Solving...
4924.058903541453x faster => cugraph 0.06783390045166016 vs 334.0181214809418
Modularity => cugraph 0.7872923202092253 should be greater than 0.7343989484495378
Reading ./data/coAuthorsDBLP.mtx...
cuGraph Solving...
NetworkX Solving...
10092.197501839399x faster => cugraph 0.06966948509216309 vs 703.1182034015656
Modularity => cugraph 0.7648739273488195 should be greater than 0.7009634341960012
Reading ./data/dblp-2010.mtx...
cuGraph Solving...
NetworkX Solving...
8046.712017220588x faster => cugraph 0.08462047576904297 vs 680.9165992736816
Modularity => cugraph 0.7506256512679915 should be greater than 0.7443468993795386
Reading ./data/citationCiteseer.mtx...
cuGraph Solving...
NetworkX Solving...
14291.698679682855x faster => cugraph 0.08543086051940918 vs 1220.9521164894104
Modularity => cugraph 0.6726575224227932 should be greater than 0.6839370382870856
Reading ./data/coPapersDBLP.mtx...
cuGraph Solving...
NetworkX Solving...
6898.94562364113x faster => cugraph 0.26548314094543457 vs 1831.553753376007
Modularity => cugraph 0.7286893741920047 should be greater than 0.7312262365408457
Reading ./data/coPapersCiteseer.mtx...
cuGraph Solving...
NetworkX Solving...
6244.639026072336x faster => cugraph 0.26204490661621094 vs 1636.3758504390717
Modularity => cugraph 0.8398191858860514 should be greater than 0.7812069006518058
Reading ./data/as-Skitter.mtx...
cuGraph Solving...
NetworkX Solving...
14900.09553131095x faster => cugraph 0.33395862579345703 vs 4976.015427827835
Modularity => cugraph 0.7690203783842553 should be greater than 0.7119040255319047
###Markdown
plot the output
###Code
%matplotlib inline
y_pos = np.arange(len(names))
plt.bar(y_pos, perf, align='center', alpha=0.5)
plt.xticks(y_pos, names)
plt.ylabel('Speed Up')
plt.title('Performance Speedup: cuGraph vs NetworkX')
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____ |
class_4-1/592B-F19-class_4-1.ipynb | ###Markdown
592B, Class 4.1 (09/24). Aliasing and the Sampling theorem
###Code
import numpy as np
import matplotlib.pyplot as plt
import scipy.io.wavfile as wavfile
import scipy.signal as signal
import librosa
from ipywidgets import interactive
from IPython.display import Audio, display
###Output
_____no_output_____
###Markdown
Aliasing. Consider the following function: $$y(t) = \cos \left(\frac{9\pi}{2}t\right ) $$ ***In-class exercise: What is the (fundamental) frequency of $y(t)$?***
###Code
fs = 1000 # Sampling rate of 1000 Hz
t_start = 0; t_stop = 4
ns = int((t_stop - t_start) * fs + 1)
x = np.linspace(0,4,ns)
y = np.cos(9*np.pi/2*x)
plt.figure("1000 Hz sampling rate")
plt.plot(x,y)
plt.title("1000 Hz sampling rate")
###Output
_____no_output_____
###Markdown
***In-class exercise: resampling at different rates*** Now let's try sampling this signal at some different sampling rates: 1. 100 Hz, 2. 10 Hz, 3. 1 Hz. Here's some sample code for doing 100 Hz. You could of course write a function that takes the desired sampling rate as an argument (a sketch of one such helper follows the cell below). Try all three sampling rates, and feel free to try some others as well.
###Code
ns_100 = int((t_stop - t_start) * 100 + 1)
x_100 = np.linspace(0,4,ns_100)
y_100 = np.cos(9*np.pi/2*x_100)
###Output
_____no_output_____
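###Markdown
One way to write that helper (a sketch, not the official solution): a function that takes the sampling rate as an argument. The names `x_10`, `y_10`, `x_1`, and `y_1` are chosen here so that the 1 Hz samples match what the later plotting cells expect.
###Code
def sample_signal(rate, t_start=0, t_stop=4):
    # Sample cos(9*pi/2 * t) at the given rate (in Hz) over [t_start, t_stop]
    n = int((t_stop - t_start) * rate + 1)
    t = np.linspace(t_start, t_stop, n)
    return t, np.cos(9 * np.pi / 2 * t)

x_10, y_10 = sample_signal(10)  # 10 Hz
x_1, y_1 = sample_signal(1)     # 1 Hz
###Output
_____no_output_____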
###Markdown
OK, so let's do some plotting to see what our samples are recovering from the original signal. Here's some sample code for plotting the 100 Hz sampling rate. ***Plot your other sampling rates, too.***
###Code
plt.figure("100 Hz sampling rate, stem plot")
plt.xlim(0,1)
plt.plot(x,y)
markerline, stemlines, baseline = plt.stem(x_100,y_100, '-.', use_line_collection = True)
plt.setp(baseline, 'color', 'r', 'linewidth', 2)
plt.figure("100 Hz sampling rate, dots")
plt.xlim(0,2)
plt.plot(x,y, 'g.', x_100, y_100, 'ro')
###Output
_____no_output_____
###Markdown
Wow, we sure are missing a lot of data--could we still recover the original signal $y(t)$?$$y(t) = \cos \left(\frac{9\pi}{2}t\right ) $$***In-class exercise: can you think of a function $z(t)$ that has the same values as our $y(t)$ at the sampled timepoints when we sample with a rate of 1Hz? If so, plot it together with the original signal and the 1 Hz sampling points.*** To do this, you could change```plt.plot(x,y)```to something like this, where `z` is your definition of $z(t)$ and `x2` is a vector of the sampled time points for 1 Hz sampling rate:```plt.plot(x,y, 'g.', x2, z, 'ro-')```
###Code
plt.figure("1 Hz sampling rate, aliasing")
plt.plot(x,y) # change this to add in plot of z(t)
markerline, stemlines, baseline = plt.stem(x_1,y_1, '-.', use_line_collection = True) # 1Hz sampling rate
###Output
_____no_output_____
###Markdown
***In-class exercise: suppose you sample at a sampling rate of 4.5 Hz. Overlay the stem plot with the original signal for this sampling rate (like the previous plots).*** The sampling theorem: the minimal sampling rate that can be used to reconstruct a signal from its samples must be greater than two times the frequency of the highest frequency component $\nu_{max}$ in the signal: sampling rate $> 2\nu_{max}$. The frequency $2\nu_{max}$ is often called the **Nyquist frequency**. ***In-class exercise: What is the Nyquist frequency for $y(t)$ below?*** $$y(t) = \cos \left(\frac{9\pi}{2}t\right ) $$ So for a complex wave (a sum of sinusoids), increasing the frequency of the highest frequency component $\nu_{max}$ drives up the required sampling rate for reconstruction. Sometimes there is no highest frequency, e.g., in an infinite series like for a square wave. Here's an intuitive example to play with. Plot a signal composed of a low frequency sinusoid and a high frequency sinusoid. As the gap in frequencies between the two frequency components increases, the resulting complex wave looks closer and closer to the lower frequency component, with lots of squigglies up and down at the frequency of the higher frequency component. (A small numerical check of the sampling theorem is appended at the end of this notebook.)
###Code
def plot_play_summed_sines(f1 = 440, f2 = 880, t_start = 0, t_stop = 2, fs = 44100, xlim_max = 0.02):
x = np.linspace(t_start, t_stop, fs * (t_stop - t_start))
y1 = np.sin(2*np.pi*f1*x)
y2 = np.sin(2*np.pi*f2*x)
plt.xlim(t_start,xlim_max)
plt.plot(x , y1, "-g", label="y1")
plt.plot(x , y2, "-b", label="y2")
plt.plot(x , y1 + y2, "-r", label="y1+y2")
plt.legend(loc="upper right")
plt.xlabel('Time (s)')
plt.ylabel('Amplitude (dB)')
plt.title("Adding up sines")
display(Audio(data=y1, rate=fs))
display(Audio(data=y2, rate=fs))
display(Audio(data=y1+y2, rate=fs))
v = interactive(plot_play_summed_sines, f1=(50,200), f2=(1000,5000), t_start = (0,0), t_stop = (0,5), xlim_max = (0.01,0.1))
display(v)
###Output
_____no_output_____ |
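###Markdown
A small numerical illustration of the sampling theorem (a sketch with arbitrarily chosen frequencies, separate from the in-class exercises): a 3 Hz cosine sampled at only 4 Hz — below its required rate of 6 Hz — is indistinguishable from a 1 Hz cosine at the sample points.
###Code
fs_low = 4                       # sampling rate below 2 * 3 Hz
t = np.arange(0, 2, 1 / fs_low)
print(np.allclose(np.cos(2 * np.pi * 3 * t), np.cos(2 * np.pi * 1 * t)))  # True: the two are aliases
###Output
_____no_output_____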
section_2/02_neuron.ipynb | ###Markdown
Implementing a neuron: we implement a single neuron in Python code. A function representing the neuron: we implement the single neuron as a Python function. The code below multiplies each element of the input `x` with the corresponding weight `w`, sums the results, and adds the bias to obtain `u`. Since we use a step function as the activation function here, the output `y` is 0 when `u` is less than 0 and 1 otherwise.
###Code
import numpy as np
import matplotlib.pyplot as plt
def neuron(x, w, b):  # x: inputs, w: weights, b: bias
    u = np.sum(x*w) + b
    y = 0 if u < 0 else 1  # step function
    return y
# For practice
###Output
_____no_output_____
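###Markdown
As a quick check (an illustration using the same weights and bias as the scan below), a single call to `neuron` shows the step behaviour directly:
###Code
# u = (-0.5)*1.0 + 0.5*2.0 + 0 = 0.5 >= 0, so the neuron outputs 1
print(neuron(np.array([1.0, 2.0]), np.array([-0.5, 0.5]), 0))
###Output
_____no_output_____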
###Markdown
Using the neuron: we use the single-neuron function and examine its response to various inputs. The code below prepares two inputs, varies each of them, feeds them into the neuron, and records the output. The outputs are stored in a 2D array and displayed as an image with the matplotlib library.
###Code
steps = 20  # number of steps over which to vary the inputs
r = 1.0  # range over which to vary the inputs (from -1 to 1)
X1 = np.linspace(-r, r, steps)  # input 1
X2 = np.linspace(-r, r, steps)  # input 2
image = np.zeros((steps, steps))  # 2D array to store the outputs
w = np.array([-0.5, 0.5])  # weights
b = 0  # bias
for i_1 in range(steps):  # vary input 1
    for i_2 in range(steps):  # vary input 2
        x = np.array([X1[i_1], X2[i_2]])  # input
        image[i_1, i_2] = neuron(x, w, b)  # store the output in the array
plt.imshow(image, "gray", vmin=0.0, vmax=1.0)  # display the array as an image
plt.colorbar()
plt.xticks([0, steps-1], [-r, r])  # show x-axis labels
plt.yticks([0, steps-1], [-r, r])  # show y-axis labels
plt.show()
# For practice
###Output
_____no_output_____ |
Perceptron Binary Classification Learning Algorithm Tutorial.ipynb | ###Markdown
Perceptron Binary Classification Learning Algorithm Tutorial. Note: this tutorial mainly shows how to use FukuML and will not go into the details of the algorithms or the math unless necessary; if you are interested in machine learning, we still recommend taking a complete course. The Perceptron Learning Algorithm (PLA) is the most basic machine learning algorithm, used mainly to teach a machine to classify; to start we apply it to binary classification and later extend it to multiclass classification. Its core idea is not hard: at heart it is an algorithm that corrects its own mistakes — whenever it misclassifies a point it adjusts the classifier, until it makes no more mistakes. PLA is also the most basic computational neuron of a neural network; the most fundamental concept behind the now-popular Deep Learning is essentially PLA, so understanding PLA is very helpful for learning machine learning down the road. A few PLA-related formulas are listed below for future reference. PLA hypothesis: $$h(x) = sign(w^Tx)$$ expresses PLA's weight hypothesis over each dimension of the data; the weight vector is written as w in the formula. Once PLA has learned the w that best separates the classes, feeding x into this hypothesis gives you the classification result. PLA mistake: $$sign(w_t^Tx_{n(t)}) \neq y_{n(t)}$$ says which data point PLA mispredicts: take the inner product of the current hypothesis $w_t$ with the point $x_{n(t)}$ and then the sign; if it differs from $y_{n(t)}$, PLA has made a mistake. PLA update: $$w_{t+1} = w_t + y_{n(t)}x_{n(t)}$$ says how PLA corrects itself after a mistake: if PLA guesses +1 but the answer is -1, it corrects $w_t$ in the direction of $-1(x_{n(t)})$; if PLA guesses -1 but the answer is +1, it corrects $w_t$ in the direction of $+1(x_{n(t)})$. (A minimal NumPy sketch of this update rule appears right after the import cell below.) Using FukuML's PLA for binary classification: next let's learn, step by step, how to use FukuML's PLA for binary classification. First, let's import PLA:
###Code
import FukuML.PLA as pla
###Output
_____no_output_____
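###Markdown
Before going further with FukuML, here is a minimal NumPy sketch of the update rule described above. This is an illustration only, not FukuML's implementation; the function name `pla_train` and the `max_iter` safety cap are assumptions added for the sketch.
###Code
import numpy as np

def pla_train(X, y, max_iter=1000):
    # X: (n, d) array with x0 = 1 already prepended; y: labels in {-1, +1}
    w = np.zeros(X.shape[1])
    for _ in range(max_iter):
        mistakes = 0
        for x_n, y_n in zip(X, y):
            if np.sign(w @ x_n) != y_n:   # PLA mistake: sign(w^T x) != y
                w = w + y_n * x_n         # PLA update: w <- w + y x
                mistakes += 1
        if mistakes == 0:                 # stop once every point is classified correctly
            break
    return w
###Output
_____no_output_____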
###Markdown
Then construct a PLA binary classifier object:
###Code
pla_bc = pla.BinaryClassifier()
###Output
_____no_output_____
###Markdown
I want FukuML to be as simple and easy to use as possible, so you only need to keep five steps in mind: 1. load training data -> 2. set parameters -> 3. initialize -> 4. train -> 5. predict, and the machine learning workflow is complete. The first step is to load training data, but it would be hard to ask everyone to produce a training dataset right now, so every machine learning algorithm in FukuML ships with built-in demo data. Let's try the built-in demo data first.
###Code
pla_bc.load_train_data()
###Output
_____no_output_____
###Markdown
This loads the PLA demo training data. If you don't believe it, you can print `pla_bc.train_X` and `pla_bc.train_Y` to take a look:
###Code
print(pla_bc.train_X)
###Output
[[ 1. 0.97681 0.10723 0.64385 0.29556 ]
[ 1. 0.67194 0.2418 0.83075 0.42741 ]
[ 1. 0.20619 0.23321 0.81004 0.98691 ]
...,
[ 1. 0.50468 0.99699 0.75136 0.51681 ]
[ 1. 0.55852 0.067689 0.666 0.98482 ]
[ 1. 0.83188 0.66817 0.23403 0.72472 ]]
###Markdown
The feature values of the training data are stored in `train_X`. Each row of the matrix represents one data point and each column represents one feature. Note that the first column of the matrix is all 1s — this is the $x_0$ that the algorithm adds by itself, not a feature present in the original training data. For this demo data each data point has only 4 features; for example, the 4 feature values of the first data point are 0.97681 0.10723 0.64385 0.29556, and the algorithm prepends $x_0 = 1$, which gives what you see here.
###Code
print(pla_bc.train_Y)
###Output
[ 1. 1. 1. 1. 1. 1. -1. 1. -1. -1. 1. 1. 1. -1. -1. 1. 1. 1.
-1. 1. 1. 1. 1. 1. 1. 1. -1. 1. 1. -1. -1. 1. 1. -1. 1. 1.
-1. -1. 1. -1. -1. 1. -1. 1. 1. 1. -1. -1. 1. 1. 1. 1. 1. 1.
1. 1. 1. -1. -1. 1. -1. 1. -1. -1. 1. -1. 1. -1. -1. 1. 1. 1.
-1. 1. 1. 1. 1. 1. 1. -1. 1. 1. 1. -1. 1. 1. -1. 1. 1. 1.
1. 1. 1. 1. -1. 1. -1. 1. 1. -1. 1. 1. 1. 1. -1. 1. 1. 1.
1. -1. 1. -1. 1. 1. -1. 1. 1. 1. 1. -1. 1. -1. -1. -1. 1. 1.
1. 1. 1. 1. 1. -1. -1. 1. 1. -1. 1. -1. 1. 1. 1. -1. 1. -1.
-1. 1. -1. -1. 1. 1. 1. 1. -1. 1. 1. 1. 1. 1. 1. 1. 1. -1.
-1. -1. 1. -1. 1. -1. 1. -1. 1. 1. -1. -1. 1. -1. 1. 1. 1. 1.
1. 1. 1. 1. -1. 1. 1. -1. 1. 1. 1. 1. 1. -1. 1. 1. 1. 1.
1. 1. -1. -1. -1. -1. 1. -1. 1. 1. -1. 1. -1. -1. 1. 1. 1. 1.
1. 1. 1. -1. 1. -1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. -1.
1. -1. 1. 1. -1. 1. 1. 1. 1. -1. 1. -1. 1. 1. 1. 1. 1. -1.
1. -1. 1. 1. 1. -1. -1. 1. 1. 1. 1. 1. -1. 1. 1. 1. 1. 1.
1. 1. 1. 1. -1. 1. 1. 1. 1. -1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. -1. 1. -1. 1. 1. 1. -1. -1. -1. 1. 1. 1. 1.
1. 1. 1. 1. 1. -1. 1. 1. 1. -1. 1. 1. -1. -1. -1. 1. 1. -1.
-1. 1. -1. -1. -1. 1. 1. 1. 1. 1. 1. 1. 1. -1. 1. 1. 1. 1.
-1. 1. 1. -1. -1. 1. -1. 1. 1. -1. 1. 1. 1. 1. 1. -1. 1. 1.
1. 1. 1. 1. 1. 1. -1. 1. 1. 1. -1. 1. -1. 1. 1. 1. -1. 1.
1. 1. -1. -1. 1. 1. 1. 1. 1. 1. 1. 1.]
###Markdown
The answers (labels) of the training data are stored in `train_Y`, i.e., the answer for each training data point: the positive class is 1 and the negative class is -1. Next, let's move on to the next step and set the parameters:
###Code
pla_bc.set_param(loop_mode='naive_cycle', step_alpha=1)
###Output
_____no_output_____
###Markdown
For the PLA algorithm I expose only two tunable parameters. One is `loop_mode`, which controls how PLA picks training points to check whether its guess is right or wrong. The default is `naive_cycle`, which checks the training data one by one in order and corrects w whenever there is a mistake; you can also set it to `random`, in which case PLA picks a random point to check and corrects w whenever there is a mistake. The other parameter is `step_alpha`, which controls how much w is corrected on each mistake; in principle setting it to 1 is fine. Next we can move on to the next step, initialization:
###Code
pla_bc.init_W()
###Output
_____no_output_____
###Markdown
At initialization we obtain an initial weight vector w, which is usually just a zero vector, but sometimes we can initialize it with Linear Regression to speed up the algorithm — we will introduce that later. As before, let's print the initialized w:
###Code
print(pla_bc.W)
###Output
[ 0. 0. 0. 0. 0.]
###Markdown
Good! It is indeed a zero vector. Everything is ready, and next comes the main event: training!
###Code
pla_bc.train()
###Output
_____no_output_____
###Markdown
Ta-da! Training is complete and we get a brand-new weight vector w. According to the PLA computation, this w classifies the data perfectly — this is the magic of machine learning! As before, let's print the w computed by PLA:
###Code
print(pla_bc.W)
###Output
[-3. 3.0841436 -1.583081 2.391305 4.5287635]
###Markdown
Sure enough, it is no longer a zero vector! With this w we can predict future data. Let me take one test data point, 0.97959 0.40402 0.96303 0.28133 1, and predict it: the first 4 values are the features of this test data point and the trailing 1 is its answer. Let's look at the prediction result:
###Code
test_data = '0.97959 0.40402 0.96303 0.28133 1'
prediction = pla_bc.prediction(test_data)
###Output
_____no_output_____
###Markdown
Print the prediction result and take a look:
###Code
print(prediction)
###Output
{'prediction': 1.0, 'input_data_x': array([ 1. , 0.97959, 0.40402, 0.96303, 0.28133]), 'input_data_y': 1.0}
###Markdown
The `prediction` method returns the result as a dictionary; the key of the prediction is `prediction` and its value is 1, and the answer of the test data is also 1, so PLA predicted the result correctly! Suppose what we want to predict now is unknown data — data that has not been classified yet. We simply pass the feature vector into the `prediction` method and set `mode='future_data'`, meaning we are predicting unknown data, and the prediction can be made. For example, try passing in the feature data 0.29634 0.4012 0.40266 0.67864:
###Code
future_data = '0.29634 0.4012 0.40266 0.67864'
prediction = pla_bc.prediction(future_data, mode='future_data')
###Output
_____no_output_____
###Markdown
Print the prediction result and take a look:
###Code
print(prediction)
###Output
{'prediction': 1.0, 'input_data_x': array([ 1. , 0.29634, 0.4012 , 0.40266, 0.67864]), 'input_data_y': None}
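###Markdown
As a quick sanity check (a sketch only; the weight values are simply copied from the `pla_bc.W` printed earlier in this run), we can evaluate the hypothesis $h(x) = sign(w^Tx)$ by hand and compare it with the prediction above:
###Code
import numpy as np

# Weights printed by print(pla_bc.W) above, and the future_data features with x0 = 1 prepended
w = np.array([-3., 3.0841436, -1.583081, 2.391305, 4.5287635])
x = np.array([1., 0.29634, 0.4012, 0.40266, 0.67864])
print(np.sign(w @ x))  # 1.0, matching the 'prediction' value returned by pla_bc.prediction
###Output
_____no_output_____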
###Markdown
PLA faithfully observes the data and gives an answer; it thinks the answer for this data point is also 1 (and in fact it is). Of course, if only one or two data points are predicted correctly, you might think it is just luck, so we must measure PLA's prediction performance on the entire training set and the entire test set. We provide a very simple method to compute the overall error rate. To see PLA's prediction error rate on the whole training dataset ($E_{in}$):
###Code
print(pla_bc.calculate_avg_error(pla_bc.train_X, pla_bc.train_Y, pla_bc.W))
###Output
0.0
###Markdown
PLA's prediction error rate on the training data is a perfect 0! This is expected, because on linearly separable data PLA keeps adjusting until there are no mistakes left. Now let's look at PLA's prediction error rate on the whole test dataset ($E_{out}$). Before that, we must load the test dataset; again, FukuML provides a demo test dataset:
###Code
pla_bc.load_test_data()
###Output
_____no_output_____
###Markdown
After loading the test data, we can compute PLA's prediction error rate on the test dataset ($E_{out}$):
###Code
print(pla_bc.calculate_test_data_avg_error())
###Output
0.0
###Markdown
PLA's prediction error rate on the test data is also a perfect 0. Of course this is partly because our demo data was designed that way, but in theory the test error rate should not differ much from the training error rate; as long as the experiment is objective and free of human contamination, a machine learning algorithm really can make correct predictions. By now you have learned how to train with the PLA provided by FukuML and then use the trained w to predict unknown data — it really takes only five steps! Simple, right? Using your own training and test datasets: in the tutorial above we used the training and test datasets provided by FukuML, but in a real situation you will of course use your own data. How do you do that? FukuML provides a very simple way to load your own data:
```
your_training_data_file = '/path/to/your/training_data/file'
pla_bc.load_train_data(your_training_data_file)
your_testing_data_file = '/path/to/your/testing_data/file'
pla_bc.load_test_data(your_testing_data_file)
```
It's that simple. Let's demonstrate it:
###Code
pla_bc = pla.BinaryClassifier()
pla_bc.load_train_data('/Users/fukuball/Projects/fuku-ml/FukuML/dataset/linear_separable_train.dat')
pla_bc.load_test_data('/Users/fukuball/Projects/fuku-ml/FukuML/dataset/linear_separable_test.dat')
###Output
_____no_output_____
###Markdown
See, the data loaded successfully. The remaining question is just what format the dataset should be in, and you can peek at a dataset FukuML provides to find out: https://github.com/fukuball/fuku-ml/blob/master/FukuML/dataset/pla_binary_train.dat The format is really simple: separate each data point's feature values with spaces on a single line, and append the data point's answer at the end, also separated by a space; the answer is 1 for the positive class and -1 for the negative class, and that's it. So suppose you want to predict bank credit-card approval and the review features are annual salary, age, and gender. If Ming has an annual salary of 1,000,000, is 30 years old, male, and was approved, the data point is: 100 30 1 1. If Hua has an annual salary of 200,000, is 25, male, and was not approved, the data point is: 20 25 1 -1. If Mei has an annual salary of 300,000, is 24, female, and was approved, the data point is: 30 24 0 1. And so on — simply and easily, you can play with machine learning using your own data. Using 2D data to aid understanding: the w that the PLA classifier computes is really a line that separates the data points perfectly. Textbook examples may use 2D datasets and present them as figures, but in the real world our data is usually not just 2D, so the learned w is a hyperplane that perfectly separates the data in a high-dimensional space, which is hard to show on a plane. So please try to think about the machine learning process in the abstract, high-dimensional space rather than relying on figures. That said, if you are new to machine learning, using 2D data to slowly understand the algorithms is a fine way to learn, so here I briefly show how to plot the 2D data points and the w that the machine learns. When loading the data points, we can plot them all on the plane, with the positive class in red and the negative class in blue:
###Code
%matplotlib inline
import FukuML.PLA as pla
import matplotlib.pyplot as plt
pla_bc = pla.BinaryClassifier()
pla_bc.load_train_data('/Users/fukuball/Projects/fuku-ml/FukuML/dataset/linear_separable_train.dat')
for idx, val in enumerate(pla_bc.train_Y):
if val==1:
plt.plot(pla_bc.train_X[idx,1], pla_bc.train_X[idx,2], "ro")
else:
plt.plot(pla_bc.train_X[idx,1], pla_bc.train_X[idx,2], "bo")
plt.axis("tight")
plt.show()
###Output
_____no_output_____
###Markdown
After the machine finishes training we obtain w. Now we just use the linear equation $w_2*x_2+w_1*x_1+w_0*x_0=0$ to find the slope, and we can draw w on the plane:
###Code
pla_bc.set_param(loop_mode='naive_cycle', step_alpha=1)
pla_bc.init_W()
pla_bc.train()
for idx, val in enumerate(pla_bc.train_Y):
if val==1:
plt.plot(pla_bc.train_X[idx,1], pla_bc.train_X[idx,2], "ro")
else:
plt.plot(pla_bc.train_X[idx,1], pla_bc.train_X[idx,2], "bo")
a0 = -4;
a1 = (-pla_bc.W[0]-pla_bc.W[1]*a0)/pla_bc.W[2]
b0 = 4;
b1 = (-pla_bc.W[0]-pla_bc.W[1]*b0)/pla_bc.W[2]
plt.plot([a0, b0], [a1, b1], "k")
plt.axis("tight")
plt.show()
###Output
_____no_output_____ |
cloud_tpu_colabs/Wave_Equation.ipynb | ###Markdown
Solving the wave equation on cloud TPUs[_Stephan Hoyer_](https://twitter.com/shoyer)In this notebook, we solve the 2D [wave equation](https://en.wikipedia.org/wiki/Wave_equation):$$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$$We use a simple [finite difference](https://en.wikipedia.org/wiki/Finite_difference_method) formulation with [Leapfrog time integration](https://en.wikipedia.org/wiki/Leapfrog_integration).Note: It is natural to express finite difference methods as convolutions, but here we intentionally avoid convolutions in favor of array indexing/arithmetic. This is because "batch" and "feature" dimensions in TPU convolutions are padded to multiples of either 8 or 128, but in our case both these dimensions are effectively of size 1. (A tiny 1D leapfrog warm-up sketch follows the environment setup cells below.) Setup required environment
###Code
# Grab other packages for this demo.
!pip install -U -q Pillow moviepy proglog scikit-image
# Make sure the Colab Runtime is set to Accelerator: TPU.
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20191206'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
###Output
_____no_output_____
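###Markdown
As a warm-up (a sketch only; the real, multi-device 2D implementation follows in the next section), here is the same leapfrog idea on a plain 1D grid with NumPy: compute the spatial Laplacian, update the velocity, then update the field.
###Code
import numpy as np

# 1D wave equation u_tt = c^2 u_xx with fixed (zero) boundaries
c, dx, dt = 1.0, 0.01, 0.005            # dt chosen so c*dt/dx < 1 (CFL condition)
x = np.arange(0.0, 1.0, dx)
u = np.exp(-((x - 0.5) / 0.05) ** 2)    # smooth initial bump
u_t = np.zeros_like(u)

for _ in range(1000):
    u_xx = np.zeros_like(u)
    u_xx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    u_t += c ** 2 * u_xx * dt           # leapfrog: velocity update
    u += u_t * dt                       # then position update
###Output
_____no_output_____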
###Markdown
Simulation code
###Code
from functools import partial
import jax
from jax import jit, pmap
from jax import lax
from jax import tree_util
import jax.numpy as jnp
import numpy as np
import matplotlib.pyplot as plt
import skimage.filters
import proglog
from moviepy.editor import ImageSequenceClip
device_count = jax.device_count()
# Spatial partitioning via halo exchange
def send_right(x, axis_name):
# Note: if some devices are omitted from the permutation, lax.ppermute
# provides zeros instead. This gives us an easy way to apply Dirichlet
# boundary conditions.
left_perm = [(i, (i + 1) % device_count) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
def send_left(x, axis_name):
left_perm = [((i + 1) % device_count, i) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
def axis_slice(ndim, index, axis):
slices = [slice(None)] * ndim
slices[axis] = index
return tuple(slices)
def slice_along_axis(array, index, axis):
return array[axis_slice(array.ndim, index, axis)]
def tree_vectorize(func):
def wrapper(x, *args, **kwargs):
return tree_util.tree_map(lambda x: func(x, *args, **kwargs), x)
return wrapper
@tree_vectorize
def halo_exchange_padding(array, padding=1, axis=0, axis_name='x'):
if not padding > 0:
raise ValueError(f'invalid padding: {padding}')
array = jnp.array(array)
if array.ndim == 0:
return array
left = slice_along_axis(array, slice(None, padding), axis)
right = slice_along_axis(array, slice(-padding, None), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
return jnp.concatenate([left, array, right], axis)
@tree_vectorize
def halo_exchange_inplace(array, padding=1, axis=0, axis_name='x'):
left = slice_along_axis(array, slice(padding, 2*padding), axis)
right = slice_along_axis(array, slice(-2*padding, -padding), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
array = jax.ops.index_update(
array, axis_slice(array.ndim, slice(None, padding), axis), left)
array = jax.ops.index_update(
array, axis_slice(array.ndim, slice(-padding, None), axis), right)
return array
# Reshaping inputs/outputs for pmap
def split_with_reshape(array, num_splits, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
tile_size, remainder = divmod(array.shape[split_axis], num_splits)
if remainder:
raise ValueError('num_splits must equally divide the dimension size')
new_shape = list(array.shape)
new_shape[split_axis] = tile_size
new_shape.insert(split_axis, num_splits)
return jnp.moveaxis(jnp.reshape(array, new_shape), split_axis, tile_id_axis)
def stack_with_reshape(array, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
array = jnp.moveaxis(array, tile_id_axis, split_axis)
new_shape = array.shape[:split_axis] + (-1,) + array.shape[split_axis+2:]
return jnp.reshape(array, new_shape)
def shard(func):
def wrapper(state):
sharded_state = tree_util.tree_map(
lambda x: split_with_reshape(x, device_count), state)
sharded_result = func(sharded_state)
result = tree_util.tree_map(stack_with_reshape, sharded_result)
return result
return wrapper
# Physics
def shift(array, offset, axis):
index = slice(offset, None) if offset >= 0 else slice(None, offset)
sliced = slice_along_axis(array, index, axis)
padding = [(0, 0)] * array.ndim
padding[axis] = (-min(offset, 0), max(offset, 0))
return jnp.pad(sliced, padding, mode='constant', constant_values=0)
def laplacian(array, step=1):
left = shift(array, +1, axis=0)
right = shift(array, -1, axis=0)
up = shift(array, +1, axis=1)
down = shift(array, -1, axis=1)
convolved = (left + right + up + down - 4 * array)
if step != 1:
convolved *= (1 / step ** 2)
return convolved
def scalar_wave_equation(u, c=1, dx=1):
return c ** 2 * laplacian(u, dx)
@jax.jit
def leapfrog_step(state, dt=0.5, c=1):
# https://en.wikipedia.org/wiki/Leapfrog_integration
u, u_t = state
u_tt = scalar_wave_equation(u, c)
u_t = u_t + u_tt * dt
u = u + u_t * dt
return (u, u_t)
# Time stepping
def multi_step(state, count, dt=1/jnp.sqrt(2), c=1):
return lax.fori_loop(0, count, lambda i, s: leapfrog_step(s, dt, c), state)
def multi_step_pmap(state, count, dt=1/jnp.sqrt(2), c=1, exchange_interval=1,
save_interval=1):
def exchange_and_multi_step(state_padded):
c_padded = halo_exchange_padding(c, exchange_interval)
evolved = multi_step(state_padded, exchange_interval, dt, c_padded)
return halo_exchange_inplace(evolved, exchange_interval)
@shard
@partial(jax.pmap, axis_name='x')
def simulate_until_output(state):
stop = save_interval // exchange_interval
state_padded = halo_exchange_padding(state, exchange_interval)
advanced = lax.fori_loop(
0, stop, lambda i, s: exchange_and_multi_step(s), state_padded)
xi = exchange_interval
return tree_util.tree_map(lambda array: array[xi:-xi, ...], advanced)
results = [state]
for _ in range(count // save_interval):
state = simulate_until_output(state)
tree_util.tree_map(lambda x: x.copy_to_host_async(), state)
results.append(state)
results = jax.device_get(results)
return tree_util.tree_multimap(lambda *xs: np.stack([np.array(x) for x in xs]), *results)
multi_step_jit = jax.jit(multi_step)
###Output
_____no_output_____
###Markdown
Initial conditions
###Code
x = jnp.linspace(0, 8, num=8*1024, endpoint=False)
y = jnp.linspace(0, 1, num=1*1024, endpoint=False)
x_mesh, y_mesh = jnp.meshgrid(x, y, indexing='ij')
# NOTE: smooth initial conditions are important, so we aren't exciting
# arbitrarily high frequencies (that cannot be resolved)
u = skimage.filters.gaussian(
((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) < 0.1 ** 2,
sigma=1)
# u = jnp.exp(-((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) / 0.1 ** 2)
# u = skimage.filters.gaussian(
# (x_mesh > 1/3) & (x_mesh < 1/2) & (y_mesh > 1/3) & (y_mesh < 1/2),
# sigma=5)
v = jnp.zeros_like(u)
c = 1 # could also use a 2D array matching the mesh shape
u.shape
###Output
_____no_output_____
###Markdown
Test scaling from 1 to 8 chips
###Code
%%time
# single TPU chip
u_final, _ = multi_step_jit((u, v), count=2**13, c=c, dt=0.5)
%%time
# 8x TPU chips, 4x more steps in roughly half the time!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.5, exchange_interval=4, save_interval=2**15)
18.3 / (10.3 / 4) # near linear scaling (8x would be perfect)
###Output
_____no_output_____
###Markdown
Save a bunch of outputs for a movie
###Code
%%time
# save more outputs for a movie -- this is slow!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.2, exchange_interval=4, save_interval=2**10)
u_final.shape
u_final.nbytes / 1e9
plt.figure(figsize=(18, 6))
plt.axis('off')
plt.imshow(u_final[-1].T, cmap='RdBu');
fig, axes = plt.subplots(9, 1, figsize=(14, 14))
[ax.axis('off') for ax in axes]
axes[0].imshow(u_final[0].T, cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
for i in range(8):
axes[i+1].imshow(u_final[4*i+1].T / abs(u_final[4*i+1]).max(), cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
import matplotlib.cm
import matplotlib.colors
from PIL import Image
def make_images(data, cmap='RdBu', vmax=None):
images = []
for frame in data:
if vmax is None:
this_vmax = np.max(abs(frame))
else:
this_vmax = vmax
norm = matplotlib.colors.Normalize(vmin=-this_vmax, vmax=this_vmax)
mappable = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap)
rgba = mappable.to_rgba(frame, bytes=True)
image = Image.fromarray(rgba, mode='RGBA')
images.append(image)
return images
def save_movie(images, path, duration=100, loop=0, **kwargs):
images[0].save(path, save_all=True, append_images=images[1:],
duration=duration, loop=loop, **kwargs)
images = make_images(u_final[::, ::8, ::8].transpose(0, 2, 1))
# Show Movie
proglog.default_bar_logger = partial(proglog.default_bar_logger, None)
ImageSequenceClip([np.array(im) for im in images], fps=25).ipython_display()
# Save GIF.
save_movie(images,'wave_movie.gif', duration=[2000]+[200]*(len(images)-2)+[2000])
# The movie sometimes takes a second before showing up in the file system.
import time; time.sleep(1)
# Download animation.
try:
from google.colab import files
except ImportError:
pass
else:
files.download('wave_movie.gif')
###Output
_____no_output_____
###Markdown
Solving the wave equation on cloud TPUs[_Stephan Hoyer_](https://twitter.com/shoyer)In this notebook, we solve the 2D [wave equation](https://en.wikipedia.org/wiki/Wave_equation):$$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$$We use a simple [finite difference](https://en.wikipedia.org/wiki/Finite_difference_method) formulation with [Leapfrog time integration](https://en.wikipedia.org/wiki/Leapfrog_integration).Note: It is natural to express finite difference methods as convolutions, but here we intentionally avoid convolutions in favor of array indexing/arithmetic. This is because "batch" and "feature" dimensions in TPU convolutions are padded to multiples of either 8 or 128, but in our case both these dimensions are effectively of size 1. Setup required environment
###Code
# Grab other packages for this demo.
!pip install -U -q Pillow moviepy proglog scikit-image
from jax.tools import colab_tpu
colab_tpu.setup_tpu()
###Output
_____no_output_____
###Markdown
Simulation code
###Code
from functools import partial
import jax
from jax import jit, pmap
from jax import lax
from jax import tree_util
import jax.numpy as jnp
import numpy as np
import matplotlib.pyplot as plt
import skimage.filters
import proglog
from moviepy.editor import ImageSequenceClip
device_count = jax.device_count()
# Spatial partitioning via halo exchange
def send_right(x, axis_name):
# Note: if some devices are omitted from the permutation, lax.ppermute
# provides zeros instead. This gives us an easy way to apply Dirichlet
# boundary conditions.
left_perm = [(i, (i + 1) % device_count) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
def send_left(x, axis_name):
left_perm = [((i + 1) % device_count, i) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
def axis_slice(ndim, index, axis):
slices = [slice(None)] * ndim
slices[axis] = index
return tuple(slices)
def slice_along_axis(array, index, axis):
return array[axis_slice(array.ndim, index, axis)]
def tree_vectorize(func):
def wrapper(x, *args, **kwargs):
return tree_util.tree_map(lambda x: func(x, *args, **kwargs), x)
return wrapper
@tree_vectorize
def halo_exchange_padding(array, padding=1, axis=0, axis_name='x'):
if not padding > 0:
raise ValueError(f'invalid padding: {padding}')
array = jnp.array(array)
if array.ndim == 0:
return array
left = slice_along_axis(array, slice(None, padding), axis)
right = slice_along_axis(array, slice(-padding, None), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
return jnp.concatenate([left, array, right], axis)
@tree_vectorize
def halo_exchange_inplace(array, padding=1, axis=0, axis_name='x'):
left = slice_along_axis(array, slice(padding, 2*padding), axis)
right = slice_along_axis(array, slice(-2*padding, -padding), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
array = jax.ops.index_update(
array, axis_slice(array.ndim, slice(None, padding), axis), left)
array = jax.ops.index_update(
array, axis_slice(array.ndim, slice(-padding, None), axis), right)
return array
# Reshaping inputs/outputs for pmap
def split_with_reshape(array, num_splits, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
tile_size, remainder = divmod(array.shape[split_axis], num_splits)
if remainder:
raise ValueError('num_splits must equally divide the dimension size')
new_shape = list(array.shape)
new_shape[split_axis] = tile_size
new_shape.insert(split_axis, num_splits)
return jnp.moveaxis(jnp.reshape(array, new_shape), split_axis, tile_id_axis)
def stack_with_reshape(array, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
array = jnp.moveaxis(array, tile_id_axis, split_axis)
new_shape = array.shape[:split_axis] + (-1,) + array.shape[split_axis+2:]
return jnp.reshape(array, new_shape)
def shard(func):
def wrapper(state):
sharded_state = tree_util.tree_map(
lambda x: split_with_reshape(x, device_count), state)
sharded_result = func(sharded_state)
result = tree_util.tree_map(stack_with_reshape, sharded_result)
return result
return wrapper
# Physics
def shift(array, offset, axis):
index = slice(offset, None) if offset >= 0 else slice(None, offset)
sliced = slice_along_axis(array, index, axis)
padding = [(0, 0)] * array.ndim
padding[axis] = (-min(offset, 0), max(offset, 0))
return jnp.pad(sliced, padding, mode='constant', constant_values=0)
def laplacian(array, step=1):
left = shift(array, +1, axis=0)
right = shift(array, -1, axis=0)
up = shift(array, +1, axis=1)
down = shift(array, -1, axis=1)
convolved = (left + right + up + down - 4 * array)
if step != 1:
convolved *= (1 / step ** 2)
return convolved
def scalar_wave_equation(u, c=1, dx=1):
return c ** 2 * laplacian(u, dx)
@jax.jit
def leapfrog_step(state, dt=0.5, c=1):
# https://en.wikipedia.org/wiki/Leapfrog_integration
u, u_t = state
u_tt = scalar_wave_equation(u, c)
u_t = u_t + u_tt * dt
u = u + u_t * dt
return (u, u_t)
# Time stepping
def multi_step(state, count, dt=1/jnp.sqrt(2), c=1):
return lax.fori_loop(0, count, lambda i, s: leapfrog_step(s, dt, c), state)
def multi_step_pmap(state, count, dt=1/jnp.sqrt(2), c=1, exchange_interval=1,
save_interval=1):
def exchange_and_multi_step(state_padded):
c_padded = halo_exchange_padding(c, exchange_interval)
evolved = multi_step(state_padded, exchange_interval, dt, c_padded)
return halo_exchange_inplace(evolved, exchange_interval)
@shard
@partial(jax.pmap, axis_name='x')
def simulate_until_output(state):
stop = save_interval // exchange_interval
state_padded = halo_exchange_padding(state, exchange_interval)
advanced = lax.fori_loop(
0, stop, lambda i, s: exchange_and_multi_step(s), state_padded)
xi = exchange_interval
return tree_util.tree_map(lambda array: array[xi:-xi, ...], advanced)
results = [state]
for _ in range(count // save_interval):
state = simulate_until_output(state)
tree_util.tree_map(lambda x: x.copy_to_host_async(), state)
results.append(state)
results = jax.device_get(results)
return tree_util.tree_multimap(lambda *xs: np.stack([np.array(x) for x in xs]), *results)
multi_step_jit = jax.jit(multi_step)
###Output
_____no_output_____
###Markdown
Initial conditions
###Code
x = jnp.linspace(0, 8, num=8*1024, endpoint=False)
y = jnp.linspace(0, 1, num=1*1024, endpoint=False)
x_mesh, y_mesh = jnp.meshgrid(x, y, indexing='ij')
# NOTE: smooth initial conditions are important, so we aren't exciting
# arbitrarily high frequencies (that cannot be resolved)
u = skimage.filters.gaussian(
((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) < 0.1 ** 2,
sigma=1)
# u = jnp.exp(-((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) / 0.1 ** 2)
# u = skimage.filters.gaussian(
# (x_mesh > 1/3) & (x_mesh < 1/2) & (y_mesh > 1/3) & (y_mesh < 1/2),
# sigma=5)
v = jnp.zeros_like(u)
c = 1 # could also use a 2D array matching the mesh shape
u.shape
###Output
_____no_output_____
###Markdown
Test scaling from 1 to 8 chips
###Code
%%time
# single TPU chip
u_final, _ = multi_step_jit((u, v), count=2**13, c=c, dt=0.5)
%%time
# 8x TPU chips, 4x more steps in roughly half the time!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.5, exchange_interval=4, save_interval=2**15)
18.3 / (10.3 / 4) # near linear scaling (8x would be perfect)
###Output
_____no_output_____
###Markdown
Save a bunch of outputs for a movie
###Code
%%time
# save more outputs for a movie -- this is slow!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.2, exchange_interval=4, save_interval=2**10)
u_final.shape
u_final.nbytes / 1e9
plt.figure(figsize=(18, 6))
plt.axis('off')
plt.imshow(u_final[-1].T, cmap='RdBu');
fig, axes = plt.subplots(9, 1, figsize=(14, 14))
[ax.axis('off') for ax in axes]
axes[0].imshow(u_final[0].T, cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
for i in range(8):
axes[i+1].imshow(u_final[4*i+1].T / abs(u_final[4*i+1]).max(), cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
import matplotlib.cm
import matplotlib.colors
from PIL import Image
def make_images(data, cmap='RdBu', vmax=None):
images = []
for frame in data:
if vmax is None:
this_vmax = np.max(abs(frame))
else:
this_vmax = vmax
norm = matplotlib.colors.Normalize(vmin=-this_vmax, vmax=this_vmax)
mappable = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap)
rgba = mappable.to_rgba(frame, bytes=True)
image = Image.fromarray(rgba, mode='RGBA')
images.append(image)
return images
def save_movie(images, path, duration=100, loop=0, **kwargs):
images[0].save(path, save_all=True, append_images=images[1:],
duration=duration, loop=loop, **kwargs)
images = make_images(u_final[::, ::8, ::8].transpose(0, 2, 1))
# Show Movie
proglog.default_bar_logger = partial(proglog.default_bar_logger, None)
ImageSequenceClip([np.array(im) for im in images], fps=25).ipython_display()
# Save GIF.
save_movie(images,'wave_movie.gif', duration=[2000]+[200]*(len(images)-2)+[2000])
# The movie sometimes takes a second before showing up in the file system.
import time; time.sleep(1)
# Download animation.
try:
from google.colab import files
except ImportError:
pass
else:
files.download('wave_movie.gif')
###Output
_____no_output_____
###Markdown
Solving the wave equation on cloud TPUs[_Stephan Hoyer_](https://twitter.com/shoyer)In this notebook, we solve the 2D [wave equation](https://en.wikipedia.org/wiki/Wave_equation):$$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$$We use a simple [finite difference](https://en.wikipedia.org/wiki/Finite_difference_method) formulation with [Leapfrog time integration](https://en.wikipedia.org/wiki/Leapfrog_integration).Note: It is natural to express finite difference methods as convolutions, but here we intentionally avoid convolutions in favor of array indexing/arithmetic. This is because "batch" and "feature" dimensions in TPU convolutions are padded to multiples of either 8 or 128, but in our case both these dimensions are effectively of size 1. Setup required environment
###Code
# Grab other packages for this demo.
!pip install -U -q Pillow moviepy proglog scikit-image
# Make sure the Colab Runtime is set to Accelerator: TPU.
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20191206'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
###Output
_____no_output_____
###Markdown
Simulation code
###Code
from functools import partial
import jax
from jax import jit, pmap
from jax import lax
from jax import tree_util
import jax.numpy as np
import numpy as onp
import matplotlib.pyplot as plt
import skimage.filters
import proglog
from moviepy.editor import ImageSequenceClip
device_count = jax.device_count()
# Spatial partitioning via halo exchange
def send_right(x, axis_name):
# Note: if some devices are omitted from the permutation, lax.ppermute
# provides zeros instead. This gives us an easy way to apply Dirichlet
# boundary conditions.
left_perm = [(i, (i + 1) % device_count) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
def send_left(x, axis_name):
left_perm = [((i + 1) % device_count, i) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
def axis_slice(ndim, index, axis):
slices = [slice(None)] * ndim
slices[axis] = index
return tuple(slices)
def slice_along_axis(array, index, axis):
return array[axis_slice(array.ndim, index, axis)]
def tree_vectorize(func):
def wrapper(x, *args, **kwargs):
return tree_util.tree_map(lambda x: func(x, *args, **kwargs), x)
return wrapper
@tree_vectorize
def halo_exchange_padding(array, padding=1, axis=0, axis_name='x'):
if not padding > 0:
raise ValueError(f'invalid padding: {padding}')
array = np.array(array)
if array.ndim == 0:
return array
left = slice_along_axis(array, slice(None, padding), axis)
right = slice_along_axis(array, slice(-padding, None), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
return np.concatenate([left, array, right], axis)
@tree_vectorize
def halo_exchange_inplace(array, padding=1, axis=0, axis_name='x'):
left = slice_along_axis(array, slice(padding, 2*padding), axis)
right = slice_along_axis(array, slice(-2*padding, -padding), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
array = jax.ops.index_update(
array, axis_slice(array.ndim, slice(None, padding), axis), left)
array = jax.ops.index_update(
array, axis_slice(array.ndim, slice(-padding, None), axis), right)
return array
# Reshaping inputs/outputs for pmap
def split_with_reshape(array, num_splits, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
tile_size, remainder = divmod(array.shape[split_axis], num_splits)
if remainder:
raise ValueError('num_splits must equally divide the dimension size')
new_shape = list(array.shape)
new_shape[split_axis] = tile_size
new_shape.insert(split_axis, num_splits)
return np.moveaxis(np.reshape(array, new_shape), split_axis, tile_id_axis)
def stack_with_reshape(array, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
array = np.moveaxis(array, tile_id_axis, split_axis)
new_shape = array.shape[:split_axis] + (-1,) + array.shape[split_axis+2:]
return np.reshape(array, new_shape)
def shard(func):
def wrapper(state):
sharded_state = tree_util.tree_map(
lambda x: split_with_reshape(x, device_count), state)
sharded_result = func(sharded_state)
result = tree_util.tree_map(stack_with_reshape, sharded_result)
return result
return wrapper
# Physics
def shift(array, offset, axis):
index = slice(offset, None) if offset >= 0 else slice(None, offset)
sliced = slice_along_axis(array, index, axis)
padding = [(0, 0)] * array.ndim
padding[axis] = (-min(offset, 0), max(offset, 0))
return np.pad(sliced, padding, mode='constant', constant_values=0)
def laplacian(array, step=1):
left = shift(array, +1, axis=0)
right = shift(array, -1, axis=0)
up = shift(array, +1, axis=1)
down = shift(array, -1, axis=1)
convolved = (left + right + up + down - 4 * array)
if step != 1:
convolved *= (1 / step ** 2)
return convolved
def scalar_wave_equation(u, c=1, dx=1):
return c ** 2 * laplacian(u, dx)
@jax.jit
def leapfrog_step(state, dt=0.5, c=1):
# https://en.wikipedia.org/wiki/Leapfrog_integration
u, u_t = state
u_tt = scalar_wave_equation(u, c)
u_t = u_t + u_tt * dt
u = u + u_t * dt
return (u, u_t)
# Time stepping
def multi_step(state, count, dt=1/np.sqrt(2), c=1):
return lax.fori_loop(0, count, lambda i, s: leapfrog_step(s, dt, c), state)
def multi_step_pmap(state, count, dt=1/np.sqrt(2), c=1, exchange_interval=1,
save_interval=1):
def exchange_and_multi_step(state_padded):
c_padded = halo_exchange_padding(c, exchange_interval)
evolved = multi_step(state_padded, exchange_interval, dt, c_padded)
return halo_exchange_inplace(evolved, exchange_interval)
@shard
@partial(jax.pmap, axis_name='x')
def simulate_until_output(state):
stop = save_interval // exchange_interval
state_padded = halo_exchange_padding(state, exchange_interval)
advanced = lax.fori_loop(
0, stop, lambda i, s: exchange_and_multi_step(s), state_padded)
xi = exchange_interval
return tree_util.tree_map(lambda array: array[xi:-xi, ...], advanced)
results = [state]
for _ in range(count // save_interval):
state = simulate_until_output(state)
tree_util.tree_map(lambda x: x.copy_to_host_async(), state)
results.append(state)
results = jax.device_get(results)
return tree_util.tree_multimap(lambda *xs: onp.stack([onp.array(x) for x in xs]), *results)
multi_step_jit = jax.jit(multi_step)
###Output
_____no_output_____
###Markdown
Initial conditions
###Code
x = np.linspace(0, 8, num=8*1024, endpoint=False)
y = np.linspace(0, 1, num=1*1024, endpoint=False)
x_mesh, y_mesh = np.meshgrid(x, y, indexing='ij')
# NOTE: smooth initial conditions are important, so we aren't exciting
# arbitrarily high frequencies (that cannot be resolved)
u = skimage.filters.gaussian(
((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) < 0.1 ** 2,
sigma=1)
# u = np.exp(-((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) / 0.1 ** 2)
# u = skimage.filters.gaussian(
# (x_mesh > 1/3) & (x_mesh < 1/2) & (y_mesh > 1/3) & (y_mesh < 1/2),
# sigma=5)
v = np.zeros_like(u)
c = 1 # could also use a 2D array matching the mesh shape
u.shape
###Output
_____no_output_____
###Markdown
Test scaling from 1 to 8 chips
###Code
%%time
# single TPU chip
u_final, _ = multi_step_jit((u, v), count=2**13, c=c, dt=0.5)
%%time
# 8x TPU chips, 4x more steps in roughly half the time!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.5, exchange_interval=4, save_interval=2**15)
18.3 / (10.3 / 4) # near linear scaling (8x would be perfect)
###Output
_____no_output_____
###Markdown
Save a bunch of outputs for a movie
###Code
%%time
# save more outputs for a movie -- this is slow!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.2, exchange_interval=4, save_interval=2**10)
u_final.shape
u_final.nbytes / 1e9
plt.figure(figsize=(18, 6))
plt.axis('off')
plt.imshow(u_final[-1].T, cmap='RdBu');
fig, axes = plt.subplots(9, 1, figsize=(14, 14))
[ax.axis('off') for ax in axes]
axes[0].imshow(u_final[0].T, cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
for i in range(8):
axes[i+1].imshow(u_final[4*i+1].T / abs(u_final[4*i+1]).max(), cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
import matplotlib.cm
import matplotlib.colors
from PIL import Image
def make_images(data, cmap='RdBu', vmax=None):
images = []
for frame in data:
if vmax is None:
this_vmax = onp.max(abs(frame))
else:
this_vmax = vmax
norm = matplotlib.colors.Normalize(vmin=-this_vmax, vmax=this_vmax)
mappable = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap)
rgba = mappable.to_rgba(frame, bytes=True)
image = Image.fromarray(rgba, mode='RGBA')
images.append(image)
return images
def save_movie(images, path, duration=100, loop=0, **kwargs):
images[0].save(path, save_all=True, append_images=images[1:],
duration=duration, loop=loop, **kwargs)
images = make_images(u_final[::, ::8, ::8].transpose(0, 2, 1))
# Show Movie
proglog.default_bar_logger = partial(proglog.default_bar_logger, None)
ImageSequenceClip([onp.array(im) for im in images], fps=25).ipython_display()
# Save GIF.
save_movie(images,'wave_movie.gif', duration=[2000]+[200]*(len(images)-2)+[2000])
# The movie sometimes takes a second before showing up in the file system.
import time; time.sleep(1)
# Download animation.
try:
from google.colab import files
except ImportError:
pass
else:
files.download('wave_movie.gif')
###Output
_____no_output_____
###Markdown
Solving the wave equation on cloud TPUs[_Stephan Hoyer_](https://twitter.com/shoyer)In this notebook, we solve the 2D [wave equation](https://en.wikipedia.org/wiki/Wave_equation):$$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$$We use a simple [finite difference](https://en.wikipedia.org/wiki/Finite_difference_method) formulation with [Leapfrog time integration](https://en.wikipedia.org/wiki/Leapfrog_integration).Note: It is natural to express finite difference methods as convolutions, but here we intentionally avoid convolutions in favor of array indexing/arithmetic. This is because "batch" and "feature" dimensions in TPU convolutions are padded to multiples of either 8 or 128, but in our case both these dimensions are effectively of size 1. Setup required environment
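One way to write the update actually implemented in `leapfrog_step` below (assuming unit grid spacing and writing $\nabla_h^2$ for the five-point discrete Laplacian):$$u_t^{\,n+1} = u_t^{\,n} + c^2\,\Delta t\,\nabla_h^2 u^{\,n}, \qquad u^{\,n+1} = u^{\,n} + \Delta t\,u_t^{\,n+1},$$i.e. the velocity is "kicked" with the current acceleration and the field is then advanced with the updated velocity.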
###Code
# Grab other packages for this demo.
!pip install -U -q Pillow moviepy proglog scikit-image
import jax.tools.colab_tpu
jax.tools.colab_tpu.setup_tpu()
###Output
_____no_output_____
###Markdown
Simulation code
###Code
from functools import partial
import jax
from jax import jit, pmap
from jax import lax
from jax import tree_util
import jax.numpy as jnp
import numpy as np
import matplotlib.pyplot as plt
import skimage.filters
import proglog
from moviepy.editor import ImageSequenceClip
device_count = jax.device_count()
# Spatial partitioning via halo exchange
def send_right(x, axis_name):
# Note: if some devices are omitted from the permutation, lax.ppermute
# provides zeros instead. This gives us an easy way to apply Dirichlet
# boundary conditions.
left_perm = [(i, (i + 1) % device_count) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
def send_left(x, axis_name):
left_perm = [((i + 1) % device_count, i) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
def axis_slice(ndim, index, axis):
slices = [slice(None)] * ndim
slices[axis] = index
return tuple(slices)
def slice_along_axis(array, index, axis):
return array[axis_slice(array.ndim, index, axis)]
def tree_vectorize(func):
def wrapper(x, *args, **kwargs):
return tree_util.tree_map(lambda x: func(x, *args, **kwargs), x)
return wrapper
@tree_vectorize
def halo_exchange_padding(array, padding=1, axis=0, axis_name='x'):
if not padding > 0:
raise ValueError(f'invalid padding: {padding}')
array = jnp.array(array)
if array.ndim == 0:
return array
left = slice_along_axis(array, slice(None, padding), axis)
right = slice_along_axis(array, slice(-padding, None), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
return jnp.concatenate([left, array, right], axis)
@tree_vectorize
def halo_exchange_inplace(array, padding=1, axis=0, axis_name='x'):
left = slice_along_axis(array, slice(padding, 2*padding), axis)
right = slice_along_axis(array, slice(-2*padding, -padding), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
array = array.at[axis_slice(array.ndim, slice(None, padding), axis)].set(left)
array = array.at[axis_slice(array.ndim, slice(-padding, None), axis)].set(right)
return array
# Reshaping inputs/outputs for pmap
def split_with_reshape(array, num_splits, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
tile_size, remainder = divmod(array.shape[split_axis], num_splits)
if remainder:
raise ValueError('num_splits must equally divide the dimension size')
new_shape = list(array.shape)
new_shape[split_axis] = tile_size
new_shape.insert(split_axis, num_splits)
return jnp.moveaxis(jnp.reshape(array, new_shape), split_axis, tile_id_axis)
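# For example, with 8 TPU cores an array of shape (8192, 1024) becomes
# shape (8, 1024, 1024): a leading device axis of size 8, one tile per core.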
def stack_with_reshape(array, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
array = jnp.moveaxis(array, tile_id_axis, split_axis)
new_shape = array.shape[:split_axis] + (-1,) + array.shape[split_axis+2:]
return jnp.reshape(array, new_shape)
def shard(func):
def wrapper(state):
sharded_state = tree_util.tree_map(
lambda x: split_with_reshape(x, device_count), state)
sharded_result = func(sharded_state)
result = tree_util.tree_map(stack_with_reshape, sharded_result)
return result
return wrapper
# Physics
def shift(array, offset, axis):
index = slice(offset, None) if offset >= 0 else slice(None, offset)
sliced = slice_along_axis(array, index, axis)
padding = [(0, 0)] * array.ndim
padding[axis] = (-min(offset, 0), max(offset, 0))
return jnp.pad(sliced, padding, mode='constant', constant_values=0)
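# laplacian below is the standard five-point stencil. Because shift() pads with
# zeros, values just outside the array are treated as u = 0, i.e. a Dirichlet
# condition at the edges of the (un-exchanged) domain.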
def laplacian(array, step=1):
left = shift(array, +1, axis=0)
right = shift(array, -1, axis=0)
up = shift(array, +1, axis=1)
down = shift(array, -1, axis=1)
convolved = (left + right + up + down - 4 * array)
if step != 1:
convolved *= (1 / step ** 2)
return convolved
def scalar_wave_equation(u, c=1, dx=1):
return c ** 2 * laplacian(u, dx)
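# leapfrog_step uses the kick-drift ordering: u_t is updated with the current
# acceleration, then u is advanced with the *new* u_t -- equivalent to leapfrog
# with the velocity stored at half-integer time steps.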
@jax.jit
def leapfrog_step(state, dt=0.5, c=1):
# https://en.wikipedia.org/wiki/Leapfrog_integration
u, u_t = state
u_tt = scalar_wave_equation(u, c)
u_t = u_t + u_tt * dt
u = u + u_t * dt
return (u, u_t)
# Time stepping
def multi_step(state, count, dt=1/jnp.sqrt(2), c=1):
return lax.fori_loop(0, count, lambda i, s: leapfrog_step(s, dt, c), state)
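# In multi_step_pmap below, each device advances its tile for exchange_interval
# steps between halo exchanges; a halo exchange_interval cells wide suffices
# because the five-point stencil moves information by one cell per step.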
def multi_step_pmap(state, count, dt=1/jnp.sqrt(2), c=1, exchange_interval=1,
save_interval=1):
def exchange_and_multi_step(state_padded):
c_padded = halo_exchange_padding(c, exchange_interval)
evolved = multi_step(state_padded, exchange_interval, dt, c_padded)
return halo_exchange_inplace(evolved, exchange_interval)
@shard
@partial(jax.pmap, axis_name='x')
def simulate_until_output(state):
stop = save_interval // exchange_interval
state_padded = halo_exchange_padding(state, exchange_interval)
advanced = lax.fori_loop(
0, stop, lambda i, s: exchange_and_multi_step(s), state_padded)
xi = exchange_interval
return tree_util.tree_map(lambda array: array[xi:-xi, ...], advanced)
results = [state]
for _ in range(count // save_interval):
state = simulate_until_output(state)
tree_util.tree_map(lambda x: x.copy_to_host_async(), state)
results.append(state)
results = jax.device_get(results)
return tree_util.tree_map(lambda *xs: np.stack([np.array(x) for x in xs]), *results)
multi_step_jit = jax.jit(multi_step)
###Output
_____no_output_____
###Markdown
Initial conditions
###Code
x = jnp.linspace(0, 8, num=8*1024, endpoint=False)
y = jnp.linspace(0, 1, num=1*1024, endpoint=False)
x_mesh, y_mesh = jnp.meshgrid(x, y, indexing='ij')
# NOTE: smooth initial conditions are important, so we aren't exciting
# arbitrarily high frequencies (that cannot be resolved)
u = skimage.filters.gaussian(
((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) < 0.1 ** 2,
sigma=1)
# u = jnp.exp(-((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) / 0.1 ** 2)
# u = skimage.filters.gaussian(
# (x_mesh > 1/3) & (x_mesh < 1/2) & (y_mesh > 1/3) & (y_mesh < 1/2),
# sigma=5)
v = jnp.zeros_like(u)
c = 1 # could also use a 2D array matching the mesh shape
u.shape
###Output
_____no_output_____
###Markdown
Test scaling from 1 to 8 chips
###Code
%%time
# single TPU chip
u_final, _ = multi_step_jit((u, v), count=2**13, c=c, dt=0.5)
%%time
# 8x TPU chips, 4x more steps in roughly half the time!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.5, exchange_interval=4, save_interval=2**15)
18.3 / (10.3 / 4) # near linear scaling (8x would be perfect)
###Output
_____no_output_____
###Markdown
Save a bunch of outputs for a movie
###Code
%%time
# save more outputs for a movie -- this is slow!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.2, exchange_interval=4, save_interval=2**10)
u_final.shape
u_final.nbytes / 1e9
plt.figure(figsize=(18, 6))
plt.axis('off')
plt.imshow(u_final[-1].T, cmap='RdBu');
fig, axes = plt.subplots(9, 1, figsize=(14, 14))
[ax.axis('off') for ax in axes]
axes[0].imshow(u_final[0].T, cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
for i in range(8):
axes[i+1].imshow(u_final[4*i+1].T / abs(u_final[4*i+1]).max(), cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
import matplotlib.cm
import matplotlib.colors
from PIL import Image
def make_images(data, cmap='RdBu', vmax=None):
images = []
for frame in data:
if vmax is None:
this_vmax = np.max(abs(frame))
else:
this_vmax = vmax
norm = matplotlib.colors.Normalize(vmin=-this_vmax, vmax=this_vmax)
mappable = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap)
rgba = mappable.to_rgba(frame, bytes=True)
image = Image.fromarray(rgba, mode='RGBA')
images.append(image)
return images
def save_movie(images, path, duration=100, loop=0, **kwargs):
images[0].save(path, save_all=True, append_images=images[1:],
duration=duration, loop=loop, **kwargs)
images = make_images(u_final[::, ::8, ::8].transpose(0, 2, 1))
# Show Movie
proglog.default_bar_logger = partial(proglog.default_bar_logger, None)
ImageSequenceClip([np.array(im) for im in images], fps=25).ipython_display()
# Save GIF.
save_movie(images,'wave_movie.gif', duration=[2000]+[200]*(len(images)-2)+[2000])
# The movie sometimes takes a second before showing up in the file system.
import time; time.sleep(1)
# Download animation.
try:
from google.colab import files
except ImportError:
pass
else:
files.download('wave_movie.gif')
###Output
_____no_output_____
###Markdown
Solving the wave equation on cloud TPUs[_Stephan Hoyer_](https://twitter.com/shoyer)In this notebook, we solve the 2D [wave equation](https://en.wikipedia.org/wiki/Wave_equation):$$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$$We use a simple [finite difference](https://en.wikipedia.org/wiki/Finite_difference_method) formulation with [Leapfrog time integration](https://en.wikipedia.org/wiki/Leapfrog_integration).Note: It is natural to express finite difference methods as convolutions, but here we intentionally avoid convolutions in favor of array indexing/arithmetic. This is because "batch" and "feature" dimensions in TPU convolutions are padded to multiples of either 8 or 128, but in our case both these dimensions are effectively of size 1. Setup required environment
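For reference, standard CFL analysis for the five-point Laplacian says this explicit scheme is stable only when $c\,\Delta t/\Delta x \le 1/\sqrt{2}$; that is presumably why `multi_step` below defaults to `dt = 1/sqrt(2)` on a unit-spaced grid.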
###Code
# Grab other packages for this demo.
!pip install -U -q Pillow moviepy proglog
# Make sure the Colab Runtime is set to Accelerator: TPU.
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20191206'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
###Output
_____no_output_____
###Markdown
Simulation code
###Code
from functools import partial
import jax
from jax import jit, pmap
from jax import lax
from jax import tree_util
import jax.numpy as np
import numpy as onp
import matplotlib.pyplot as plt
import skimage.filters
import proglog
from moviepy.editor import ImageSequenceClip
device_count = jax.device_count()
# Spatial partitioning via halo exchange
def send_right(x, axis_name):
# Note: if some devices are omitted from the permutation, lax.ppermute
# provides zeros instead. This gives us an easy way to apply Dirichlet
# boundary conditions.
left_perm = [(i, (i + 1) % device_count) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
def send_left(x, axis_name):
left_perm = [((i + 1) % device_count, i) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
def axis_slice(ndim, index, axis):
slices = [slice(None)] * ndim
slices[axis] = index
return tuple(slices)
def slice_along_axis(array, index, axis):
return array[axis_slice(array.ndim, index, axis)]
def tree_vectorize(func):
def wrapper(x, *args, **kwargs):
return tree_util.tree_map(lambda x: func(x, *args, **kwargs), x)
return wrapper
@tree_vectorize
def halo_exchange_padding(array, padding=1, axis=0, axis_name='x'):
if not padding > 0:
raise ValueError(f'invalid padding: {padding}')
array = np.array(array)
if array.ndim == 0:
return array
left = slice_along_axis(array, slice(None, padding), axis)
right = slice_along_axis(array, slice(-padding, None), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
return np.concatenate([left, array, right], axis)
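# halo_exchange_inplace below refreshes an existing halo. Note that
# jax.ops.index_update never mutates its input: it returns a new array with the
# boundary slices overwritten, so "inplace" refers to the layout, not mutation.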
@tree_vectorize
def halo_exchange_inplace(array, padding=1, axis=0, axis_name='x'):
left = slice_along_axis(array, slice(padding, 2*padding), axis)
right = slice_along_axis(array, slice(-2*padding, -padding), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
array = jax.ops.index_update(
array, axis_slice(array.ndim, slice(None, padding), axis), left)
array = jax.ops.index_update(
array, axis_slice(array.ndim, slice(-padding, None), axis), right)
return array
# Reshaping inputs/outputs for pmap
def split_with_reshape(array, num_splits, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
tile_size, remainder = divmod(array.shape[split_axis], num_splits)
if remainder:
raise ValueError('num_splits must equally divide the dimension size')
new_shape = list(array.shape)
new_shape[split_axis] = tile_size
new_shape.insert(split_axis, num_splits)
return np.moveaxis(np.reshape(array, new_shape), split_axis, tile_id_axis)
def stack_with_reshape(array, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
array = np.moveaxis(array, tile_id_axis, split_axis)
new_shape = array.shape[:split_axis] + (-1,) + array.shape[split_axis+2:]
return np.reshape(array, new_shape)
def shard(func):
def wrapper(state):
sharded_state = tree_util.tree_map(
lambda x: split_with_reshape(x, device_count), state)
sharded_result = func(sharded_state)
result = tree_util.tree_map(stack_with_reshape, sharded_result)
return result
return wrapper
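# shard splits every array in the state pytree into one tile per device before
# calling the (pmapped) function, then stitches the per-device outputs back into
# full-size arrays afterwards.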
# Physics
def shift(array, offset, axis):
index = slice(offset, None) if offset >= 0 else slice(None, offset)
sliced = slice_along_axis(array, index, axis)
padding = [(0, 0)] * array.ndim
padding[axis] = (-min(offset, 0), max(offset, 0))
return np.pad(sliced, padding, mode='constant', constant_values=0)
def laplacian(array, step=1):
left = shift(array, +1, axis=0)
right = shift(array, -1, axis=0)
up = shift(array, +1, axis=1)
down = shift(array, -1, axis=1)
convolved = (left + right + up + down - 4 * array)
if step != 1:
convolved *= (1 / step ** 2)
return convolved
def scalar_wave_equation(u, c=1, dx=1):
return c ** 2 * laplacian(u, dx)
@jax.jit
def leapfrog_step(state, dt=0.5, c=1):
# https://en.wikipedia.org/wiki/Leapfrog_integration
u, u_t = state
u_tt = scalar_wave_equation(u, c)
u_t = u_t + u_tt * dt
u = u + u_t * dt
return (u, u_t)
# Time stepping
def multi_step(state, count, dt=1/np.sqrt(2), c=1):
return lax.fori_loop(0, count, lambda i, s: leapfrog_step(s, dt, c), state)
def multi_step_pmap(state, count, dt=1/np.sqrt(2), c=1, exchange_interval=1,
save_interval=1):
def exchange_and_multi_step(state_padded):
c_padded = halo_exchange_padding(c, exchange_interval)
evolved = multi_step(state_padded, exchange_interval, dt, c_padded)
return halo_exchange_inplace(evolved, exchange_interval)
@shard
@partial(jax.pmap, axis_name='x')
def simulate_until_output(state):
stop = save_interval // exchange_interval
state_padded = halo_exchange_padding(state, exchange_interval)
advanced = lax.fori_loop(
0, stop, lambda i, s: exchange_and_multi_step(s), state_padded)
xi = exchange_interval
return tree_util.tree_map(lambda array: array[xi:-xi, ...], advanced)
results = [state]
for _ in range(count // save_interval):
state = simulate_until_output(state)
tree_util.tree_map(lambda x: x.copy_to_host_async(), state)
results.append(state)
results = jax.device_get(results)
return tree_util.tree_multimap(lambda *xs: onp.stack([onp.array(x) for x in xs]), *results)
multi_step_jit = jax.jit(multi_step)
###Output
_____no_output_____
###Markdown
Initial conditions
###Code
x = np.linspace(0, 8, num=8*1024, endpoint=False)
y = np.linspace(0, 1, num=1*1024, endpoint=False)
x_mesh, y_mesh = np.meshgrid(x, y, indexing='ij')
# NOTE: smooth initial conditions are important, so we aren't exciting
# arbitrarily high frequencies (that cannot be resolved)
u = skimage.filters.gaussian(
((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) < 0.1 ** 2,
sigma=1)
# u = np.exp(-((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) / 0.1 ** 2)
# u = skimage.filters.gaussian(
# (x_mesh > 1/3) & (x_mesh < 1/2) & (y_mesh > 1/3) & (y_mesh < 1/2),
# sigma=5)
v = np.zeros_like(u)
c = 1 # could also use a 2D array matching the mesh shape
u.shape
###Output
_____no_output_____
###Markdown
Test scaling from 1 to 8 chips
###Code
%%time
# single TPU chip
u_final, _ = multi_step_jit((u, v), count=2**13, c=c, dt=0.5)
%%time
# 8x TPU chips, 4x more steps in roughly half the time!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.5, exchange_interval=4, save_interval=2**15)
18.3 / (10.3 / 4) # near linear scaling (8x would be perfect)
###Output
_____no_output_____
###Markdown
Save a bunch of outputs for a movie
###Code
%%time
# save more outputs for a movie -- this is slow!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.2, exchange_interval=4, save_interval=2**10)
u_final.shape
u_final.nbytes / 1e9
plt.figure(figsize=(18, 6))
plt.axis('off')
plt.imshow(u_final[-1].T, cmap='RdBu');
fig, axes = plt.subplots(9, 1, figsize=(14, 14))
[ax.axis('off') for ax in axes]
axes[0].imshow(u_final[0].T, cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
for i in range(8):
axes[i+1].imshow(u_final[4*i+1].T / abs(u_final[4*i+1]).max(), cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
import matplotlib.cm
import matplotlib.colors
from PIL import Image
def make_images(data, cmap='RdBu', vmax=None):
images = []
for frame in data:
if vmax is None:
this_vmax = onp.max(abs(frame))
else:
this_vmax = vmax
norm = matplotlib.colors.Normalize(vmin=-this_vmax, vmax=this_vmax)
mappable = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap)
rgba = mappable.to_rgba(frame, bytes=True)
image = Image.fromarray(rgba, mode='RGBA')
images.append(image)
return images
def save_movie(images, path, duration=100, loop=0, **kwargs):
images[0].save(path, save_all=True, append_images=images[1:],
duration=duration, loop=loop, **kwargs)
images = make_images(u_final[::, ::8, ::8].transpose(0, 2, 1))
# Show Movie
proglog.default_bar_logger = partial(proglog.default_bar_logger, None)
ImageSequenceClip([onp.array(im) for im in images], fps=25).ipython_display()
# Save GIF.
save_movie(images,'wave_movie.gif', duration=[2000]+[200]*(len(images)-2)+[2000])
# The movie sometimes takes a second before showing up in the file system.
import time; time.sleep(1)
# Download animation.
from google.colab import files
files.download('wave_movie.gif')
###Output
_____no_output_____
###Markdown
Solving the wave equation on cloud TPUs[_Stephan Hoyer_](https://twitter.com/shoyer)In this notebook, we solve the 2D [wave equation](https://en.wikipedia.org/wiki/Wave_equation):$$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$$We use a simple [finite difference](https://en.wikipedia.org/wiki/Finite_difference_method) formulation with [Leapfrog time integration](https://en.wikipedia.org/wiki/Leapfrog_integration).Note: It is natural to express finite difference methods as convolutions, but here we intentionally avoid convolutions in favor of array indexing/arithmetic. This is because "batch" and "feature" dimensions in TPU convolutions are padded to multiples of either 8 or 128, but in our case both these dimensions are effectively of size 1. Setup required environment
###Code
# Grab other packages for this demo.
!pip install -U -q Pillow moviepy proglog scikit-image
import jax.tools.colab_tpu
jax.tools.colab_tpu.setup_tpu()
###Output
_____no_output_____
###Markdown
Simulation code
###Code
from functools import partial
import jax
from jax import jit, pmap
from jax import lax
from jax import tree_util
import jax.numpy as jnp
import numpy as np
import matplotlib.pyplot as plt
import skimage.filters
import proglog
from moviepy.editor import ImageSequenceClip
device_count = jax.device_count()
# Spatial partitioning via halo exchange
def send_right(x, axis_name):
# Note: if some devices are omitted from the permutation, lax.ppermute
# provides zeros instead. This gives us an easy way to apply Dirichlet
# boundary conditions.
left_perm = [(i, (i + 1) % device_count) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
def send_left(x, axis_name):
left_perm = [((i + 1) % device_count, i) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
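# Orientation: send_right ships a slice to the device with the next-higher index
# along the sharded axis, send_left does the opposite; the first device's left
# halo and the last device's right halo therefore receive zeros (see note above).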
def axis_slice(ndim, index, axis):
slices = [slice(None)] * ndim
slices[axis] = index
return tuple(slices)
def slice_along_axis(array, index, axis):
return array[axis_slice(array.ndim, index, axis)]
def tree_vectorize(func):
def wrapper(x, *args, **kwargs):
return tree_util.tree_map(lambda x: func(x, *args, **kwargs), x)
return wrapper
@tree_vectorize
def halo_exchange_padding(array, padding=1, axis=0, axis_name='x'):
if not padding > 0:
raise ValueError(f'invalid padding: {padding}')
array = jnp.array(array)
if array.ndim == 0:
return array
left = slice_along_axis(array, slice(None, padding), axis)
right = slice_along_axis(array, slice(-padding, None), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
return jnp.concatenate([left, array, right], axis)
@tree_vectorize
def halo_exchange_inplace(array, padding=1, axis=0, axis_name='x'):
left = slice_along_axis(array, slice(padding, 2*padding), axis)
right = slice_along_axis(array, slice(-2*padding, -padding), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
array = jax.ops.index_update(
array, axis_slice(array.ndim, slice(None, padding), axis), left)
array = jax.ops.index_update(
array, axis_slice(array.ndim, slice(-padding, None), axis), right)
return array
# Reshaping inputs/outputs for pmap
def split_with_reshape(array, num_splits, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
tile_size, remainder = divmod(array.shape[split_axis], num_splits)
if remainder:
raise ValueError('num_splits must equally divide the dimension size')
new_shape = list(array.shape)
new_shape[split_axis] = tile_size
new_shape.insert(split_axis, num_splits)
return jnp.moveaxis(jnp.reshape(array, new_shape), split_axis, tile_id_axis)
def stack_with_reshape(array, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
array = jnp.moveaxis(array, tile_id_axis, split_axis)
new_shape = array.shape[:split_axis] + (-1,) + array.shape[split_axis+2:]
return jnp.reshape(array, new_shape)
def shard(func):
def wrapper(state):
sharded_state = tree_util.tree_map(
lambda x: split_with_reshape(x, device_count), state)
sharded_result = func(sharded_state)
result = tree_util.tree_map(stack_with_reshape, sharded_result)
return result
return wrapper
# Physics
def shift(array, offset, axis):
index = slice(offset, None) if offset >= 0 else slice(None, offset)
sliced = slice_along_axis(array, index, axis)
padding = [(0, 0)] * array.ndim
padding[axis] = (-min(offset, 0), max(offset, 0))
return jnp.pad(sliced, padding, mode='constant', constant_values=0)
def laplacian(array, step=1):
left = shift(array, +1, axis=0)
right = shift(array, -1, axis=0)
up = shift(array, +1, axis=1)
down = shift(array, -1, axis=1)
convolved = (left + right + up + down - 4 * array)
if step != 1:
convolved *= (1 / step ** 2)
return convolved
def scalar_wave_equation(u, c=1, dx=1):
return c ** 2 * laplacian(u, dx)
@jax.jit
def leapfrog_step(state, dt=0.5, c=1):
# https://en.wikipedia.org/wiki/Leapfrog_integration
u, u_t = state
u_tt = scalar_wave_equation(u, c)
u_t = u_t + u_tt * dt
u = u + u_t * dt
return (u, u_t)
# Time stepping
def multi_step(state, count, dt=1/jnp.sqrt(2), c=1):
return lax.fori_loop(0, count, lambda i, s: leapfrog_step(s, dt, c), state)
def multi_step_pmap(state, count, dt=1/jnp.sqrt(2), c=1, exchange_interval=1,
save_interval=1):
def exchange_and_multi_step(state_padded):
c_padded = halo_exchange_padding(c, exchange_interval)
evolved = multi_step(state_padded, exchange_interval, dt, c_padded)
return halo_exchange_inplace(evolved, exchange_interval)
@shard
@partial(jax.pmap, axis_name='x')
def simulate_until_output(state):
stop = save_interval // exchange_interval
state_padded = halo_exchange_padding(state, exchange_interval)
advanced = lax.fori_loop(
0, stop, lambda i, s: exchange_and_multi_step(s), state_padded)
xi = exchange_interval
return tree_util.tree_map(lambda array: array[xi:-xi, ...], advanced)
results = [state]
for _ in range(count // save_interval):
state = simulate_until_output(state)
tree_util.tree_map(lambda x: x.copy_to_host_async(), state)
results.append(state)
results = jax.device_get(results)
return tree_util.tree_multimap(lambda *xs: np.stack([np.array(x) for x in xs]), *results)
multi_step_jit = jax.jit(multi_step)
###Output
_____no_output_____
###Markdown
Initial conditions
###Code
x = jnp.linspace(0, 8, num=8*1024, endpoint=False)
y = jnp.linspace(0, 1, num=1*1024, endpoint=False)
x_mesh, y_mesh = jnp.meshgrid(x, y, indexing='ij')
# NOTE: smooth initial conditions are important, so we aren't exciting
# arbitrarily high frequencies (that cannot be resolved)
u = skimage.filters.gaussian(
((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) < 0.1 ** 2,
sigma=1)
# u = jnp.exp(-((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) / 0.1 ** 2)
# u = skimage.filters.gaussian(
# (x_mesh > 1/3) & (x_mesh < 1/2) & (y_mesh > 1/3) & (y_mesh < 1/2),
# sigma=5)
v = jnp.zeros_like(u)
c = 1 # could also use a 2D array matching the mesh shape
u.shape
###Output
_____no_output_____
###Markdown
Test scaling from 1 to 8 chips
###Code
%%time
# single TPU chip
u_final, _ = multi_step_jit((u, v), count=2**13, c=c, dt=0.5)
%%time
# 8x TPU chips, 4x more steps in roughly half the time!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.5, exchange_interval=4, save_interval=2**15)
18.3 / (10.3 / 4) # near linear scaling (8x would be perfect)
###Output
_____no_output_____
###Markdown
Save a bunch of outputs for a movie
###Code
%%time
# save more outputs for a movie -- this is slow!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.2, exchange_interval=4, save_interval=2**10)
u_final.shape
u_final.nbytes / 1e9
plt.figure(figsize=(18, 6))
plt.axis('off')
plt.imshow(u_final[-1].T, cmap='RdBu');
fig, axes = plt.subplots(9, 1, figsize=(14, 14))
[ax.axis('off') for ax in axes]
axes[0].imshow(u_final[0].T, cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
for i in range(8):
axes[i+1].imshow(u_final[4*i+1].T / abs(u_final[4*i+1]).max(), cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
import matplotlib.cm
import matplotlib.colors
from PIL import Image
def make_images(data, cmap='RdBu', vmax=None):
images = []
for frame in data:
if vmax is None:
this_vmax = np.max(abs(frame))
else:
this_vmax = vmax
norm = matplotlib.colors.Normalize(vmin=-this_vmax, vmax=this_vmax)
mappable = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap)
rgba = mappable.to_rgba(frame, bytes=True)
image = Image.fromarray(rgba, mode='RGBA')
images.append(image)
return images
def save_movie(images, path, duration=100, loop=0, **kwargs):
images[0].save(path, save_all=True, append_images=images[1:],
duration=duration, loop=loop, **kwargs)
images = make_images(u_final[::, ::8, ::8].transpose(0, 2, 1))
# Show Movie
proglog.default_bar_logger = partial(proglog.default_bar_logger, None)
ImageSequenceClip([np.array(im) for im in images], fps=25).ipython_display()
# Save GIF.
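# duration is the per-frame display time in milliseconds: hold the first and
# last frames for 2 s and show the remaining frames at 200 ms each.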
save_movie(images,'wave_movie.gif', duration=[2000]+[200]*(len(images)-2)+[2000])
# The movie sometimes takes a second before showing up in the file system.
import time; time.sleep(1)
# Download animation.
try:
from google.colab import files
except ImportError:
pass
else:
files.download('wave_movie.gif')
###Output
_____no_output_____
###Markdown
Solving the wave equation on cloud TPUs[_Stephan Hoyer_](https://twitter.com/shoyer)In this notebook, we solve the 2D [wave equation](https://en.wikipedia.org/wiki/Wave_equation):$$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$$We use a simple [finite difference](https://en.wikipedia.org/wiki/Finite_difference_method) formulation with [Leapfrog time integration](https://en.wikipedia.org/wiki/Leapfrog_integration).Note: It is natural to express finite difference methods as convolutions, but here we intentionally avoid convolutions in favor of array indexing/arithmetic. This is because "batch" and "feature" dimensions in TPU convolutions are padded to multiples of either 8 or 128, but in our case both these dimensions are effectively of size 1. Setup required environment
###Code
# Grab newest JAX version.
!pip install --upgrade -q jax==0.1.54 jaxlib==0.1.37
# Grab other packages for this demo.
!pip install -U -q Pillow moviepy proglog
# Make sure the Colab Runtime is set to Accelerator: TPU.
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20191206'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
###Output
_____no_output_____
###Markdown
Simulation code
###Code
from functools import partial
import jax
from jax import jit, pmap
from jax import lax
from jax import tree_util
import jax.numpy as np
import numpy as onp
import matplotlib.pyplot as plt
import skimage.filters
import proglog
from moviepy.editor import ImageSequenceClip
device_count = jax.device_count()
# Spatial partitioning via halo exchange
def send_right(x, axis_name):
# Note: if some devices are omitted from the permutation, lax.ppermute
# provides zeros instead. This gives us an easy way to apply Dirichlet
# boundary conditions.
left_perm = [(i, (i + 1) % device_count) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
def send_left(x, axis_name):
left_perm = [((i + 1) % device_count, i) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
def axis_slice(ndim, index, axis):
slices = [slice(None)] * ndim
slices[axis] = index
return tuple(slices)
def slice_along_axis(array, index, axis):
return array[axis_slice(array.ndim, index, axis)]
def tree_vectorize(func):
def wrapper(x, *args, **kwargs):
return tree_util.tree_map(lambda x: func(x, *args, **kwargs), x)
return wrapper
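# tree_vectorize lets the halo-exchange helpers below accept an arbitrary pytree
# (here the (u, u_t) state tuple) by mapping the wrapped single-array function
# over its leaves.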
@tree_vectorize
def halo_exchange_padding(array, padding=1, axis=0, axis_name='x'):
if not padding > 0:
raise ValueError(f'invalid padding: {padding}')
array = np.array(array)
if array.ndim == 0:
return array
left = slice_along_axis(array, slice(None, padding), axis)
right = slice_along_axis(array, slice(-padding, None), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
return np.concatenate([left, array, right], axis)
@tree_vectorize
def halo_exchange_inplace(array, padding=1, axis=0, axis_name='x'):
left = slice_along_axis(array, slice(padding, 2*padding), axis)
right = slice_along_axis(array, slice(-2*padding, -padding), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
array = jax.ops.index_update(
array, axis_slice(array.ndim, slice(None, padding), axis), left)
array = jax.ops.index_update(
array, axis_slice(array.ndim, slice(-padding, None), axis), right)
return array
# Reshaping inputs/outputs for pmap
def split_with_reshape(array, num_splits, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
tile_size, remainder = divmod(array.shape[split_axis], num_splits)
if remainder:
raise ValueError('num_splits must equally divide the dimension size')
new_shape = list(array.shape)
new_shape[split_axis] = tile_size
new_shape.insert(split_axis, num_splits)
return np.moveaxis(np.reshape(array, new_shape), split_axis, tile_id_axis)
def stack_with_reshape(array, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
array = np.moveaxis(array, tile_id_axis, split_axis)
new_shape = array.shape[:split_axis] + (-1,) + array.shape[split_axis+2:]
return np.reshape(array, new_shape)
def shard(func):
def wrapper(state):
sharded_state = tree_util.tree_map(
lambda x: split_with_reshape(x, device_count), state)
sharded_result = func(sharded_state)
result = tree_util.tree_map(stack_with_reshape, sharded_result)
return result
return wrapper
# Physics
def shift(array, offset, axis):
index = slice(offset, None) if offset >= 0 else slice(None, offset)
sliced = slice_along_axis(array, index, axis)
padding = [(0, 0)] * array.ndim
padding[axis] = (-min(offset, 0), max(offset, 0))
return np.pad(sliced, padding, mode='constant', constant_values=0)
def laplacian(array, step=1):
left = shift(array, +1, axis=0)
right = shift(array, -1, axis=0)
up = shift(array, +1, axis=1)
down = shift(array, -1, axis=1)
convolved = (left + right + up + down - 4 * array)
if step != 1:
convolved *= (1 / step ** 2)
return convolved
def scalar_wave_equation(u, c=1, dx=1):
return c ** 2 * laplacian(u, dx)
@jax.jit
def leapfrog_step(state, dt=0.5, c=1):
# https://en.wikipedia.org/wiki/Leapfrog_integration
u, u_t = state
u_tt = scalar_wave_equation(u, c)
u_t = u_t + u_tt * dt
u = u + u_t * dt
return (u, u_t)
# Time stepping
def multi_step(state, count, dt=1/np.sqrt(2), c=1):
return lax.fori_loop(0, count, lambda i, s: leapfrog_step(s, dt, c), state)
def multi_step_pmap(state, count, dt=1/np.sqrt(2), c=1, exchange_interval=1,
save_interval=1):
def exchange_and_multi_step(state_padded):
c_padded = halo_exchange_padding(c, exchange_interval)
evolved = multi_step(state_padded, exchange_interval, dt, c_padded)
return halo_exchange_inplace(evolved, exchange_interval)
@shard
@partial(jax.pmap, axis_name='x')
def simulate_until_output(state):
stop = save_interval // exchange_interval
state_padded = halo_exchange_padding(state, exchange_interval)
advanced = lax.fori_loop(
0, stop, lambda i, s: exchange_and_multi_step(s), state_padded)
xi = exchange_interval
return tree_util.tree_map(lambda array: array[xi:-xi, ...], advanced)
results = [state]
for _ in range(count // save_interval):
state = simulate_until_output(state)
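    # Kick off an asynchronous device-to-host copy of this checkpoint so the
    # transfer overlaps with the next round of simulation; jax.device_get below
    # gathers the results.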
tree_util.tree_map(lambda x: x.copy_to_host_async(), state)
results.append(state)
results = jax.device_get(results)
return tree_util.tree_multimap(lambda *xs: onp.stack([onp.array(x) for x in xs]), *results)
multi_step_jit = jax.jit(multi_step)
###Output
_____no_output_____
###Markdown
Initial conditions
###Code
x = np.linspace(0, 8, num=8*1024, endpoint=False)
y = np.linspace(0, 1, num=1*1024, endpoint=False)
x_mesh, y_mesh = np.meshgrid(x, y, indexing='ij')
# NOTE: smooth initial conditions are important, so we aren't exciting
# arbitrarily high frequencies (that cannot be resolved)
u = skimage.filters.gaussian(
((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) < 0.1 ** 2,
sigma=1)
# u = np.exp(-((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) / 0.1 ** 2)
# u = skimage.filters.gaussian(
# (x_mesh > 1/3) & (x_mesh < 1/2) & (y_mesh > 1/3) & (y_mesh < 1/2),
# sigma=5)
v = np.zeros_like(u)
c = 1 # could also use a 2D array matching the mesh shape
u.shape
###Output
_____no_output_____
###Markdown
Test scaling from 1 to 8 chips
###Code
%%time
# single TPU chip
u_final, _ = multi_step_jit((u, v), count=2**13, c=c, dt=0.5)
%%time
# 8x TPU chips, 4x more steps in roughly half the time!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.5, exchange_interval=4, save_interval=2**15)
18.3 / (10.3 / 4) # near linear scaling (8x would be perfect)
###Output
_____no_output_____
###Markdown
Save a bunch of outputs for a movie
###Code
%%time
# save more outputs for a movie -- this is slow!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.2, exchange_interval=4, save_interval=2**10)
u_final.shape
u_final.nbytes / 1e9
plt.figure(figsize=(18, 6))
plt.axis('off')
plt.imshow(u_final[-1].T, cmap='RdBu');
fig, axes = plt.subplots(9, 1, figsize=(14, 14))
[ax.axis('off') for ax in axes]
axes[0].imshow(u_final[0].T, cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
for i in range(8):
axes[i+1].imshow(u_final[4*i+1].T / abs(u_final[4*i+1]).max(), cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
import matplotlib.cm
import matplotlib.colors
from PIL import Image
def make_images(data, cmap='RdBu', vmax=None):
images = []
for frame in data:
if vmax is None:
this_vmax = onp.max(abs(frame))
else:
this_vmax = vmax
norm = matplotlib.colors.Normalize(vmin=-this_vmax, vmax=this_vmax)
mappable = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap)
rgba = mappable.to_rgba(frame, bytes=True)
image = Image.fromarray(rgba, mode='RGBA')
images.append(image)
return images
def save_movie(images, path, duration=100, loop=0, **kwargs):
images[0].save(path, save_all=True, append_images=images[1:],
duration=duration, loop=loop, **kwargs)
images = make_images(u_final[::, ::8, ::8].transpose(0, 2, 1))
# Show Movie
proglog.default_bar_logger = partial(proglog.default_bar_logger, None)
ImageSequenceClip([onp.array(im) for im in images], fps=25).ipython_display()
# Save GIF.
save_movie(images,'wave_movie.gif', duration=[2000]+[200]*(len(images)-2)+[2000])
# The movie sometimes takes a second before showing up in the file system.
import time; time.sleep(1)
# Download animation.
from google.colab import files
files.download('wave_movie.gif')
###Output
_____no_output_____
###Markdown
Solving the wave equation on cloud TPUs[_Stephan Hoyer_](https://twitter.com/shoyer)In this notebook, we solve the 2D [wave equation](https://en.wikipedia.org/wiki/Wave_equation):$$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$$We use a simple [finite difference](https://en.wikipedia.org/wiki/Finite_difference_method) formulation with [Leapfrog time integration](https://en.wikipedia.org/wiki/Leapfrog_integration).Note: It is natural to express finite difference methods as convolutions, but here we intentionally avoid convolutions in favor of array indexing/arithmetic. This is because "batch" and "feature" dimensions in TPU convolutions are padded to multiples of either 8 or 128, but in our case both these dimensions are effectively of size 1. Setup required environment
###Code
# Grab other packages for this demo.
!pip install -U -q Pillow moviepy proglog scikit-image
import jax.tools.colab_tpu
jax.tools.colab_tpu.setup_tpu()
###Output
_____no_output_____
###Markdown
Simulation code
###Code
from functools import partial
import jax
from jax import jit, pmap
from jax import lax
from jax import tree_util
import jax.numpy as jnp
import numpy as np
import matplotlib.pyplot as plt
import skimage.filters
import proglog
from moviepy.editor import ImageSequenceClip
device_count = jax.device_count()
# Spatial partitioning via halo exchange
def send_right(x, axis_name):
# Note: if some devices are omitted from the permutation, lax.ppermute
# provides zeros instead. This gives us an easy way to apply Dirichlet
# boundary conditions.
left_perm = [(i, (i + 1) % device_count) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
def send_left(x, axis_name):
left_perm = [((i + 1) % device_count, i) for i in range(device_count - 1)]
return lax.ppermute(x, perm=left_perm, axis_name=axis_name)
def axis_slice(ndim, index, axis):
slices = [slice(None)] * ndim
slices[axis] = index
return tuple(slices)
def slice_along_axis(array, index, axis):
return array[axis_slice(array.ndim, index, axis)]
def tree_vectorize(func):
def wrapper(x, *args, **kwargs):
return tree_util.tree_map(lambda x: func(x, *args, **kwargs), x)
return wrapper
@tree_vectorize
def halo_exchange_padding(array, padding=1, axis=0, axis_name='x'):
if not padding > 0:
raise ValueError(f'invalid padding: {padding}')
array = jnp.array(array)
if array.ndim == 0:
return array
left = slice_along_axis(array, slice(None, padding), axis)
right = slice_along_axis(array, slice(-padding, None), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
return jnp.concatenate([left, array, right], axis)
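# halo_exchange_inplace below refreshes an existing halo. The .at[...].set(...)
# calls are JAX's functional updates: they return a new array with the boundary
# slices replaced rather than mutating the input.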
@tree_vectorize
def halo_exchange_inplace(array, padding=1, axis=0, axis_name='x'):
left = slice_along_axis(array, slice(padding, 2*padding), axis)
right = slice_along_axis(array, slice(-2*padding, -padding), axis)
right, left = send_left(left, axis_name), send_right(right, axis_name)
array = array.at[axis_slice(array.ndim, slice(None, padding), axis)].set(left)
array = array.at[axis_slice(array.ndim, slice(-padding, None), axis)].set(right)
return array
# Reshaping inputs/outputs for pmap
def split_with_reshape(array, num_splits, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
tile_size, remainder = divmod(array.shape[split_axis], num_splits)
if remainder:
raise ValueError('num_splits must equally divide the dimension size')
new_shape = list(array.shape)
new_shape[split_axis] = tile_size
new_shape.insert(split_axis, num_splits)
return jnp.moveaxis(jnp.reshape(array, new_shape), split_axis, tile_id_axis)
def stack_with_reshape(array, *, split_axis=0, tile_id_axis=None):
if tile_id_axis is None:
tile_id_axis = split_axis
array = jnp.moveaxis(array, tile_id_axis, split_axis)
new_shape = array.shape[:split_axis] + (-1,) + array.shape[split_axis+2:]
return jnp.reshape(array, new_shape)
def shard(func):
def wrapper(state):
sharded_state = tree_util.tree_map(
lambda x: split_with_reshape(x, device_count), state)
sharded_result = func(sharded_state)
result = tree_util.tree_map(stack_with_reshape, sharded_result)
return result
return wrapper
# Physics
def shift(array, offset, axis):
index = slice(offset, None) if offset >= 0 else slice(None, offset)
sliced = slice_along_axis(array, index, axis)
padding = [(0, 0)] * array.ndim
padding[axis] = (-min(offset, 0), max(offset, 0))
return jnp.pad(sliced, padding, mode='constant', constant_values=0)
def laplacian(array, step=1):
left = shift(array, +1, axis=0)
right = shift(array, -1, axis=0)
up = shift(array, +1, axis=1)
down = shift(array, -1, axis=1)
convolved = (left + right + up + down - 4 * array)
if step != 1:
convolved *= (1 / step ** 2)
return convolved
def scalar_wave_equation(u, c=1, dx=1):
return c ** 2 * laplacian(u, dx)
@jax.jit
def leapfrog_step(state, dt=0.5, c=1):
# https://en.wikipedia.org/wiki/Leapfrog_integration
u, u_t = state
u_tt = scalar_wave_equation(u, c)
u_t = u_t + u_tt * dt
u = u + u_t * dt
return (u, u_t)
# Time stepping
def multi_step(state, count, dt=1/jnp.sqrt(2), c=1):
return lax.fori_loop(0, count, lambda i, s: leapfrog_step(s, dt, c), state)
def multi_step_pmap(state, count, dt=1/jnp.sqrt(2), c=1, exchange_interval=1,
save_interval=1):
def exchange_and_multi_step(state_padded):
c_padded = halo_exchange_padding(c, exchange_interval)
evolved = multi_step(state_padded, exchange_interval, dt, c_padded)
return halo_exchange_inplace(evolved, exchange_interval)
@shard
@partial(jax.pmap, axis_name='x')
def simulate_until_output(state):
stop = save_interval // exchange_interval
state_padded = halo_exchange_padding(state, exchange_interval)
advanced = lax.fori_loop(
0, stop, lambda i, s: exchange_and_multi_step(s), state_padded)
xi = exchange_interval
return tree_util.tree_map(lambda array: array[xi:-xi, ...], advanced)
results = [state]
for _ in range(count // save_interval):
state = simulate_until_output(state)
tree_util.tree_map(lambda x: x.copy_to_host_async(), state)
results.append(state)
results = jax.device_get(results)
return tree_util.tree_multimap(lambda *xs: np.stack([np.array(x) for x in xs]), *results)
multi_step_jit = jax.jit(multi_step)
###Output
_____no_output_____
###Markdown
Initial conditions
###Code
x = jnp.linspace(0, 8, num=8*1024, endpoint=False)
y = jnp.linspace(0, 1, num=1*1024, endpoint=False)
x_mesh, y_mesh = jnp.meshgrid(x, y, indexing='ij')
# NOTE: smooth initial conditions are important, so we aren't exciting
# arbitrarily high frequencies (that cannot be resolved)
u = skimage.filters.gaussian(
((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) < 0.1 ** 2,
sigma=1)
# u = jnp.exp(-((x_mesh - 1/3) ** 2 + (y_mesh - 1/4) ** 2) / 0.1 ** 2)
# u = skimage.filters.gaussian(
# (x_mesh > 1/3) & (x_mesh < 1/2) & (y_mesh > 1/3) & (y_mesh < 1/2),
# sigma=5)
v = jnp.zeros_like(u)
c = 1 # could also use a 2D array matching the mesh shape
u.shape
###Output
_____no_output_____
###Markdown
Test scaling from 1 to 8 chips
###Code
%%time
# single TPU chip
u_final, _ = multi_step_jit((u, v), count=2**13, c=c, dt=0.5)
%%time
# 8x TPU chips, 4x more steps in roughly half the time!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.5, exchange_interval=4, save_interval=2**15)
18.3 / (10.3 / 4) # near linear scaling (8x would be perfect)
###Output
_____no_output_____
###Markdown
Save a bunch of outputs for a movie
###Code
%%time
# save more outputs for a movie -- this is slow!
u_final, _ = multi_step_pmap(
(u, v), count=2**15, c=c, dt=0.2, exchange_interval=4, save_interval=2**10)
u_final.shape
u_final.nbytes / 1e9
plt.figure(figsize=(18, 6))
plt.axis('off')
plt.imshow(u_final[-1].T, cmap='RdBu');
fig, axes = plt.subplots(9, 1, figsize=(14, 14))
[ax.axis('off') for ax in axes]
axes[0].imshow(u_final[0].T, cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
for i in range(8):
axes[i+1].imshow(u_final[4*i+1].T / abs(u_final[4*i+1]).max(), cmap='RdBu', aspect='equal', vmin=-1, vmax=1)
import matplotlib.cm
import matplotlib.colors
from PIL import Image
def make_images(data, cmap='RdBu', vmax=None):
images = []
for frame in data:
if vmax is None:
this_vmax = np.max(abs(frame))
else:
this_vmax = vmax
norm = matplotlib.colors.Normalize(vmin=-this_vmax, vmax=this_vmax)
mappable = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap)
rgba = mappable.to_rgba(frame, bytes=True)
image = Image.fromarray(rgba, mode='RGBA')
images.append(image)
return images
def save_movie(images, path, duration=100, loop=0, **kwargs):
images[0].save(path, save_all=True, append_images=images[1:],
duration=duration, loop=loop, **kwargs)
images = make_images(u_final[::, ::8, ::8].transpose(0, 2, 1))
# Show Movie
proglog.default_bar_logger = partial(proglog.default_bar_logger, None)
ImageSequenceClip([np.array(im) for im in images], fps=25).ipython_display()
# Save GIF.
save_movie(images,'wave_movie.gif', duration=[2000]+[200]*(len(images)-2)+[2000])
# The movie sometimes takes a second before showing up in the file system.
import time; time.sleep(1)
# Download animation.
try:
from google.colab import files
except ImportError:
pass
else:
files.download('wave_movie.gif')
###Output
_____no_output_____ |
docs/ipynb/load.ipynb | ###Markdown
If the data is too large to put in memory all at once, we can load it batch by batch into memory from disk with tf.data.Dataset.This [function](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) can help you build such a tf.data.Dataset for image data.First, we download the data and extract the files.
###Code
import autokeras as ak
import tensorflow as tf
import os
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
local_file_path = tf.keras.utils.get_file(origin=dataset_url,
fname='image_data',
extract=True)
# The file is extracted in the same directory as the downloaded file.
local_dir_path = os.path.dirname(local_file_path)
# After checking manually, we know the extracted data is in 'flower_photos'.
data_dir = os.path.join(local_dir_path, 'flower_photos')
print(data_dir)
###Output
_____no_output_____
###Markdown
The directory should look like this. Each folder contains the images in the same class.```flower_photos/ daisy/ dandelion/ roses/ sunflowers/ tulips/```We can split the data into training and testing as we load them.
###Code
batch_size = 32
img_height = 180
img_width = 180
train_data = ak.image_dataset_from_directory(
data_dir,
# Use 20% data as testing data.
validation_split=0.2,
subset="training",
# Set seed to ensure the same split when loading testing data.
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
test_data = ak.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
###Output
_____no_output_____
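###Markdown
A quick way to sanity-check the loaded data (a sketch using the `train_data` defined above; the exact label shape and dtype depend on the AutoKeras/TensorFlow version): each element of these datasets is a batch of images and labels.
###Code
for images, labels in train_data.take(1):
    print(images.shape, images.dtype)  # e.g. (32, 180, 180, 3) float32
    print(labels.shape, labels.dtype)  # one label per image in the batch
###Output
_____no_output_____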
###Markdown
Then we just do one quick demo of AutoKeras to make sure the dataset works.
###Code
clf = ak.ImageClassifier(overwrite=True, max_trials=1)
clf.fit(train_data, epochs=1)
print(clf.evaluate(test_data))
###Output
_____no_output_____
###Markdown
You can also load text datasets in the same way.
###Code
dataset_url = "http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
local_file_path = tf.keras.utils.get_file(
fname="text_data",
origin=dataset_url,
extract=True,
)
# The file is extracted in the same directory as the downloaded file.
local_dir_path = os.path.dirname(local_file_path)
# After checking manually, we know the extracted data is in 'aclImdb'.
data_dir = os.path.join(local_dir_path, 'aclImdb')
# Remove the unused data folder.
import shutil
shutil.rmtree(os.path.join(data_dir, 'train/unsup'))
###Output
_____no_output_____
###Markdown
For this dataset, the data is already split into train and test.We just load them separately.
###Code
print(data_dir)
train_data = ak.text_dataset_from_directory(
os.path.join(data_dir, 'train'),
batch_size=batch_size)
test_data = ak.text_dataset_from_directory(
os.path.join(data_dir, 'test'),
shuffle=False,
batch_size=batch_size)
clf = ak.TextClassifier(overwrite=True, max_trials=1)
clf.fit(train_data, epochs=2)
print(clf.evaluate(test_data))
###Output
_____no_output_____
###Markdown
If you want to use generators, you can refer to the following code.
###Code
import math
import numpy as np
N_BATCHES = 30
BATCH_SIZE = 100
N_FEATURES = 10
def get_data_generator(n_batches, batch_size, n_features):
"""Get a generator returning n_batches random data of batch_size with n_features."""
def data_generator():
for _ in range(n_batches * batch_size):
x = np.random.randn(n_features)
y = x.sum(axis=0) / n_features > 0.5
yield x, y
return data_generator
dataset = tf.data.Dataset.from_generator(
get_data_generator(N_BATCHES, BATCH_SIZE, N_FEATURES),
output_types=(tf.float32, tf.float32),
output_shapes=((N_FEATURES,), tuple()),
).batch(BATCH_SIZE)
clf = ak.StructuredDataClassifier(overwrite=True, max_trials=1, seed=5)
clf.fit(x=dataset, validation_data=dataset, batch_size=BATCH_SIZE)
print(clf.evaluate(dataset))
###Output
_____no_output_____
###Markdown
Load Images from DiskIf the data is too large to put in memory all at once, we can load it batch by batch into memory from disk with tf.data.Dataset. This [function](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) can help you build such a tf.data.Dataset for image data.First, we download the data and extract the files.
###Code
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz" # noqa: E501
local_file_path = tf.keras.utils.get_file(
origin=dataset_url, fname="image_data", extract=True
)
# The file is extracted in the same directory as the downloaded file.
local_dir_path = os.path.dirname(local_file_path)
# After checking manually, we know the extracted data is in 'flower_photos'.
data_dir = os.path.join(local_dir_path, "flower_photos")
print(data_dir)
###Output
_____no_output_____
###Markdown
The directory should look like this. Each folder contains the images in the same class.```flower_photos/ daisy/ dandelion/ roses/ sunflowers/ tulips/```We can split the data into training and testing as we load them.
###Code
batch_size = 32
img_height = 180
img_width = 180
train_data = ak.image_dataset_from_directory(
data_dir,
# Use 20% data as testing data.
validation_split=0.2,
subset="training",
# Set seed to ensure the same split when loading testing data.
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size,
)
test_data = ak.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size,
)
###Output
_____no_output_____
###Markdown
Then we just do one quick demo of AutoKeras to make sure the dataset works.
###Code
clf = ak.ImageClassifier(overwrite=True, max_trials=1)
clf.fit(train_data, epochs=1)
print(clf.evaluate(test_data))
###Output
_____no_output_____
###Markdown
Load Texts from DiskYou can also load text datasets in the same way.
###Code
dataset_url = "http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
local_file_path = tf.keras.utils.get_file(
fname="text_data",
origin=dataset_url,
extract=True,
)
# The file is extracted in the same directory as the downloaded file.
local_dir_path = os.path.dirname(local_file_path)
# After checking manually, we know the extracted data is in 'aclImdb'.
data_dir = os.path.join(local_dir_path, "aclImdb")
# Remove the unused data folder.
shutil.rmtree(os.path.join(data_dir, "train/unsup"))
###Output
_____no_output_____
###Markdown
For this dataset, the data is already split into train and test.We just load them separately.
###Code
print(data_dir)
train_data = ak.text_dataset_from_directory(
os.path.join(data_dir, "train"), batch_size=batch_size
)
test_data = ak.text_dataset_from_directory(
os.path.join(data_dir, "test"), shuffle=False, batch_size=batch_size
)
clf = ak.TextClassifier(overwrite=True, max_trials=1)
clf.fit(train_data, epochs=2)
print(clf.evaluate(test_data))
###Output
_____no_output_____
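###Markdown
Beyond the aggregate evaluation score, you may also want raw predictions; a minimal sketch using the fitted classifier and the test dataset from above:
###Code
predicted = clf.predict(test_data)
# Show the predicted labels for the first few reviews.
print(predicted[:5])
###Output
_____no_output_____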
###Markdown
Load Data with Python GeneratorsIf you want to use generators, you can refer to the following code.
###Code
N_BATCHES = 30
BATCH_SIZE = 100
N_FEATURES = 10
def get_data_generator(n_batches, batch_size, n_features):
"""Get a generator returning n_batches random data.
The shape of the data is (batch_size, n_features).
"""
def data_generator():
for _ in range(n_batches * batch_size):
x = np.random.randn(n_features)
y = x.sum(axis=0) / n_features > 0.5
yield x, y
return data_generator
dataset = tf.data.Dataset.from_generator(
get_data_generator(N_BATCHES, BATCH_SIZE, N_FEATURES),
output_types=(tf.float32, tf.float32),
output_shapes=((N_FEATURES,), tuple()),
).batch(BATCH_SIZE)
clf = ak.StructuredDataClassifier(overwrite=True, max_trials=1, seed=5)
clf.fit(x=dataset, validation_data=dataset, batch_size=BATCH_SIZE)
print(clf.evaluate(dataset))
###Output
_____no_output_____
###Markdown
Load Images from DiskIf the data is too large to put in memory all at once, we can load it batch by batch into memory from disk with tf.data.Dataset.This [function](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) can help you build such a tf.data.Dataset for image data.First, we download the data and extract the files.
###Code
import autokeras as ak
import tensorflow as tf
import os
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
local_file_path = tf.keras.utils.get_file(origin=dataset_url,
fname='image_data',
extract=True)
# The file is extracted in the same directory as the downloaded file.
local_dir_path = os.path.dirname(local_file_path)
# After checking manually, we know the extracted data is in 'flower_photos'.
data_dir = os.path.join(local_dir_path, 'flower_photos')
print(data_dir)
###Output
_____no_output_____
###Markdown
The directory should look like this. Each folder contains the images in the same class.```flowers_photos/ daisy/ dandelion/ roses/ sunflowers/ tulips/```We can split the data into training and testing as we load them.
###Code
batch_size = 32
img_height = 180
img_width = 180
train_data = ak.image_dataset_from_directory(
data_dir,
# Use 20% data as testing data.
validation_split=0.2,
subset="training",
# Set seed to ensure the same split when loading testing data.
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
test_data = ak.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Then we just do one quick demo of AutoKeras to make sure the dataset works.
###Code
clf = ak.ImageClassifier(overwrite=True, max_trials=1)
clf.fit(train_data, epochs=1)
print(clf.evaluate(test_data))
###Output
_____no_output_____
###Markdown
Load Texts from DiskYou can also load text datasets in the same way.
###Code
dataset_url = "http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
local_file_path = tf.keras.utils.get_file(
fname="text_data",
origin=dataset_url,
extract=True,
)
# The file is extracted in the same directory as the downloaded file.
local_dir_path = os.path.dirname(local_file_path)
# After checking manually, we know the extracted data is in 'aclImdb'.
data_dir = os.path.join(local_dir_path, 'aclImdb')
# Remove the unused data folder.
import shutil
shutil.rmtree(os.path.join(data_dir, 'train/unsup'))
###Output
_____no_output_____
###Markdown
For this dataset, the data is already split into train and test.We just load them separately.
###Code
print(data_dir)
train_data = ak.text_dataset_from_directory(
os.path.join(data_dir, 'train'),
batch_size=batch_size)
test_data = ak.text_dataset_from_directory(
os.path.join(data_dir, 'test'),
shuffle=False,
batch_size=batch_size)
clf = ak.TextClassifier(overwrite=True, max_trials=1)
clf.fit(train_data, epochs=2)
print(clf.evaluate(test_data))
###Output
_____no_output_____
###Markdown
Load Data with Python GeneratorsIf you want to use generators, you can refer to the following code.
###Code
import math
import numpy as np
N_BATCHES = 30
BATCH_SIZE = 100
N_FEATURES = 10
def get_data_generator(n_batches, batch_size, n_features):
"""Get a generator returning n_batches random data of batch_size with n_features."""
def data_generator():
for _ in range(n_batches * batch_size):
x = np.random.randn(n_features)
y = x.sum(axis=0) / n_features > 0.5
yield x, y
return data_generator
dataset = tf.data.Dataset.from_generator(
get_data_generator(N_BATCHES, BATCH_SIZE, N_FEATURES),
output_types=(tf.float32, tf.float32),
output_shapes=((N_FEATURES,), tuple()),
).batch(BATCH_SIZE)
clf = ak.StructuredDataClassifier(overwrite=True, max_trials=1, seed=5)
clf.fit(x=dataset, validation_data=dataset, batch_size=BATCH_SIZE)
print(clf.evaluate(dataset))
###Output
_____no_output_____
###Markdown
If the data is too large to put in memory all at once, we can load it batch by batch into memory from disk with tf.data.Dataset.This [function](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) can help you build such a tf.data.Dataset for image data.First, we download the data and extract the files.
###Code
import tensorflow as tf
import os
# dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
# local_file_path = tf.keras.utils.get_file(origin=dataset_url,
# fname='image_data',
# extract=True)
# # The file is extracted in the same directory as the downloaded file.
# local_dir_path = os.path.dirname(local_file_path)
# # After checking manually, we know the extracted data is in 'flower_photos'
# data_dir = os.path.join(local_dir_path, 'flower_photos')
# print(data_dir)
###Output
_____no_output_____
###Markdown
The directory should look like this. Each folder contains the images in the same class.```flowers_photos/ daisy/ dandelion/ roses/ sunflowers/ tulips/```We can split the data into training and testing as we load them.
###Code
batch_size = 32
img_height = 180
img_width = 180
# train_data = tf.keras.preprocessing.image_dataset_from_directory(
# data_dir,
# # Use 20% data as testing data.
# validation_split=0.2,
# subset="training",
# # Set seed to ensure the same split when loading testing data.
# seed=123,
# image_size=(img_height, img_width),
# batch_size=batch_size)
# test_data = tf.keras.preprocessing.image_dataset_from_directory(
# data_dir,
# validation_split=0.2,
# subset="validation",
# seed=123,
# image_size=(img_height, img_width),
# batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Then we just do one quick demo of AutoKeras to make sure the dataset works.
###Code
import autokeras as ak
# clf = ak.ImageClassifier(overwrite=True, max_trials=1)
# clf.fit(train_data, epochs=1)
# print(clf.evaluate(test_data))
###Output
_____no_output_____
###Markdown
You can also load text datasets in the same way.
###Code
dataset_url = "http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
local_file_path = tf.keras.utils.get_file(
fname="text_data",
origin=dataset_url,
extract=True,
)
# The file is extracted in the same directory as the downloaded file.
local_dir_path = os.path.dirname(local_file_path)
# After checking manually, we know the extracted data is in 'aclImdb'.
data_dir = os.path.join(local_dir_path, 'aclImdb')
# Remove the unused data folder.
import shutil
shutil.rmtree(os.path.join(data_dir, 'train/unsup'))
###Output
_____no_output_____
###Markdown
For this dataset, the data is already split into train and test.We just load them separately.
###Code
print(data_dir)
train_data = tf.keras.preprocessing.text_dataset_from_directory(
os.path.join(data_dir, 'train'),
class_names=['pos', 'neg'],
validation_split=0.2,
subset="training",
# shuffle=False,
seed=123,
batch_size=batch_size)
val_data = tf.keras.preprocessing.text_dataset_from_directory(
os.path.join(data_dir, 'train'),
class_names=['pos', 'neg'],
validation_split=0.2,
subset="validation",
# shuffle=False,
seed=123,
batch_size=batch_size)
test_data = tf.keras.preprocessing.text_dataset_from_directory(
os.path.join(data_dir, 'test'),
class_names=['pos', 'neg'],
shuffle=False,
batch_size=batch_size)
for x, y in train_data:
print(x.numpy()[0])
print(y.numpy()[0])
# record_x = x.numpy()
# record_y = y.numpy()
break
for x, y in train_data:
print(x.numpy()[0])
print(y.numpy()[0])
break
# train_data = tf.keras.preprocessing.text_dataset_from_directory(
# os.path.join(data_dir, 'train'),
# class_names=['pos', 'neg'],
# shuffle=True,
# seed=123,
# batch_size=batch_size)
# for x, y in train_data:
# for i, a in enumerate(x.numpy()):
# for j, b in enumerate(record_x):
# if a == b:
# print('*')
# assert record_y[j] == y.numpy()[i]
# import numpy as np
# x_train = []
# y_train = []
# for x, y in train_data:
# for a in x.numpy():
# x_train.append(a)
# for a in y.numpy():
# y_train.append(a)
# x_train = np.array(x_train)
# y_train = np.array(y_train)
# train_data = train_data.shuffle(1000, seed=123, reshuffle_each_iteration=False)
clf = ak.TextClassifier(overwrite=True, max_trials=2)
# clf.fit(train_data, validation_data=test_data)
# clf.fit(train_data, validation_data=train_data)
clf.fit(train_data, validation_data=val_data)
# clf.fit(x_train, y_train)
# clf.fit(train_data)
print(clf.evaluate(test_data))
###Output
_____no_output_____
###Markdown
Load Images from DiskIf the data is too large to put in memory all at once, we can load it batch by batch into memory from disk with tf.data.Dataset. This [function](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) can help you build such a tf.data.Dataset for image data.First, we download the data and extract the files.
###Code
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz" # noqa: E501
local_file_path = tf.keras.utils.get_file(
origin=dataset_url, fname="image_data", extract=True
)
# The file is extracted in the same directory as the downloaded file.
local_dir_path = os.path.dirname(local_file_path)
# After checking manually, we know the extracted data is in 'flower_photos'.
data_dir = os.path.join(local_dir_path, "flower_photos")
print(data_dir)
###Output
_____no_output_____
###Markdown
The directory should look like this. Each folder contains the images in the same class.```flowers_photos/ daisy/ dandelion/ roses/ sunflowers/ tulips/```We can split the data into training and testing as we load them.
###Code
batch_size = 32
img_height = 180
img_width = 180
train_data = ak.image_dataset_from_directory(
data_dir,
# Use 20% data as testing data.
validation_split=0.2,
subset="training",
# Set seed to ensure the same split when loading testing data.
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size,
)
test_data = ak.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size,
)
###Output
_____no_output_____
###Markdown
Then we just do one quick demo of AutoKeras to make sure the dataset works.
###Code
clf = ak.ImageClassifier(overwrite=True, max_trials=1)
clf.fit(train_data, epochs=1)
print(clf.evaluate(test_data))
###Output
_____no_output_____
###Markdown
Load Texts from DiskYou can also load text datasets in the same way.
###Code
dataset_url = "http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
local_file_path = tf.keras.utils.get_file(
fname="text_data",
origin=dataset_url,
extract=True,
)
# The file is extracted in the same directory as the downloaded file.
local_dir_path = os.path.dirname(local_file_path)
# After checking manually, we know the extracted data is in 'aclImdb'.
data_dir = os.path.join(local_dir_path, "aclImdb")
# Remove the unused data folder.
shutil.rmtree(os.path.join(data_dir, "train/unsup"))
###Output
_____no_output_____
###Markdown
For this dataset, the data is already split into train and test.We just load them separately.
###Code
print(data_dir)
train_data = ak.text_dataset_from_directory(
os.path.join(data_dir, "train"), batch_size=batch_size
)
test_data = ak.text_dataset_from_directory(
os.path.join(data_dir, "test"), shuffle=False, batch_size=batch_size
)
clf = ak.TextClassifier(overwrite=True, max_trials=1)
clf.fit(train_data, epochs=2)
print(clf.evaluate(test_data))
###Output
_____no_output_____
###Markdown
Load Data with Python GeneratorsIf you want to use generators, you can refer to the following code.
###Code
N_BATCHES = 30
BATCH_SIZE = 100
N_FEATURES = 10
def get_data_generator(n_batches, batch_size, n_features):
"""Get a generator returning n_batches random data.
The shape of the data is (batch_size, n_features).
"""
def data_generator():
for _ in range(n_batches * batch_size):
x = np.random.randn(n_features)
y = x.sum(axis=0) / n_features > 0.5
yield x, y
return data_generator
dataset = tf.data.Dataset.from_generator(
get_data_generator(N_BATCHES, BATCH_SIZE, N_FEATURES),
output_types=(tf.float32, tf.float32),
output_shapes=((N_FEATURES,), tuple()),
).batch(BATCH_SIZE)
clf = ak.StructuredDataClassifier(overwrite=True, max_trials=1, seed=5)
clf.fit(x=dataset, validation_data=dataset, batch_size=BATCH_SIZE)
print(clf.evaluate(dataset))
###Output
_____no_output_____ |
visualizations/bokeh/notebooks/glyphs/.ipynb_checkpoints/circle_x-checkpoint.ipynb | ###Markdown
Bokeh Circle X Glyph
###Code
from bokeh.plotting import figure, output_file, show
from bokeh.models import Range1d
from math import radians
fill_color = '#e08214'
line_color = '#fdb863'
output_file("../../figures/glyph-circle-x.html")
p = figure(plot_width=400, plot_height=400)
p.circle_x(x=0,y=0,size=100, fill_alpha=1,fill_color=fill_color,
line_alpha=1, line_color=line_color, line_dash='dashed', line_width=5)
p.circle_x(x=0,y=1,size=100, fill_alpha=0.8, fill_color=fill_color,
line_alpha=1, line_color=line_color, line_dash='dotdash', line_width=8)
p.circle_x(x=1,y=0,size=100, fill_alpha=0.6, fill_color = fill_color,
line_alpha=1, line_color=line_color, line_dash='dotted', line_width=13)
p.circle_x(x=1,y=1,size=100, fill_alpha=0.4, fill_color = fill_color,
line_alpha=1, line_color=line_color, line_dash='solid', line_width=17)
p.x_range = Range1d(-0.5,1.5, bounds=(-1,2))
p.y_range = Range1d(-0.5,1.5, bounds=(-1,2))
show(p)
###Output
_____no_output_____ |
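###Markdown
If you prefer to see the glyphs inline in the notebook rather than only in the exported HTML file, you can switch Bokeh to notebook output; a small sketch assuming a Jupyter environment:
###Code
from bokeh.io import output_notebook
output_notebook()  # route subsequent show() calls into the notebook
show(p)
###Output
_____no_output_____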
surface_realization.ipynb | ###Markdown
Surface realization
###Code
from surface import grammar
from surface import converter
from surface import utils
from collections import defaultdict
import ast
###Output
_____no_output_____
###Markdown
First, we assign the training and the test files to variables; the files can be downloaded from the SRST 19 page.
###Code
TRAIN_FILE = "pt_bosque-ud-train.conllu"
TEST_FILE = "pt_bosque-Pred-Stanford.conllu"
###Output
_____no_output_____
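###Markdown
Optionally, you can peek at the CoNLL-U input to see the token-per-line format the grammars are trained on; a small illustration, assuming the file sits in the working directory:
###Code
with open(TRAIN_FILE) as f:
    # Print the first ten lines of the training treebank.
    for _, line in zip(range(10), f):
        print(line.rstrip())
###Output
_____no_output_____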
###Markdown
Then we train the two static grammars (the first corresponds to the subgraphs from the UD trees, the second is the fallback grammar, where each rule is binary). Later, the dynamic grammars are generated from these.
###Code
grammar.train_subgraphs(TRAIN_FILE, TEST_FILE)
grammar.train_edges(TRAIN_FILE, TEST_FILE)
SUBGRAPH_GRAMMAR_FILE = "train_subgraphs"
EDGE_GRAMMAR_FILE = "train_edges"
###Output
_____no_output_____
###Markdown
We need to extract the graphs from the CoNLL format (conversion from CoNLL to ISI) and the rules that use the lin feature. The rules incorporate the lin feature so that we can dynamically delete every rule that contradicts the linear order.
###Code
rules, _ = converter.extract_rules(TEST_FILE)
graphs, _, id_graphs= converter.convert(TEST_FILE)
_, sentences, _ = converter.convert(TEST_FILE)
conll = grammar.get_conll_from_file(TEST_FILE)
id_to_parse = {}
stops = []
###Output
_____no_output_____
###Markdown
We run through the sentences and call the Alto parser to generate the derivation and map the UD representation to a string. Alto can be downloaded from [Bitbucket](https://bitbucket.org/tclup/alto/downloads/).
###Code
for sen_id in range(0, len(rules)):
print(sen_id)
try:
grammar_fn = open('dep_grammar_spec.irtg', 'w')
grammar.generate_grammar(SUBGRAPH_GRAMMAR_FILE, rules[sen_id], grammar_fn)
grammar.generate_terminal_ids(conll[sen_id], grammar_fn)
grammar_fn.close()
set_parse("ewt_ones", id_graphs[sen_id])
!timeout 70 java -Xmx32G -cp alto-2.3.6-SNAPSHOT-all.jar de.up.ling.irtg.script.ParsingEvaluator ewt_ones -g dep_grammar_spec.irtg -I ud -O string=toString -o surface_eval_ewt
text_parse, conll_parse = get_parse("surface_eval_ewt", conll[sen_id])
id_to_parse[sen_id] = (text_parse, conll_parse)
except StopIteration:
        print("stop iteration")
stops.append(sen_id)
continue
###Output
_____no_output_____
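###Markdown
The `set_parse` and `get_parse` helpers used in the loop above are not defined in this notebook (they presumably live in `surface.utils`); the following is only a hypothetical sketch of what they might do, assuming `set_parse` writes a one-sentence Alto input corpus and `get_parse` reads back the realized string together with the sentence's CoNLL lines.
###Code
# Hypothetical sketch only; the real helpers may differ.
def set_parse_sketch(corpus_path, isi_graph):
    # Write a one-sentence Alto input corpus containing the ISI graph.
    with open(corpus_path, "w") as f:
        f.write(isi_graph + "\n")
def get_parse_sketch(result_path, conll_sen):
    # Read the first realized string produced by Alto and return it with the CoNLL lines.
    with open(result_path) as f:
        text_parse = f.readline().strip()
    return text_parse, conll_sen
###Output
_____no_output_____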
###Markdown
We then iterate through the sentences that took too long to parse with the original grammar, and switch to the binary grammar for faster results.
###Code
for sen_id in stops:
grammar_fn = open('dep_grammar_edges.irtg', 'w')
grammar.generate_grammar(EDGE_GRAMMAR_FILE, rules[sen_id], grammar_fn)
grammar.generate_terminal_ids(conll[sen_id], grammar_fn)
grammar_fn.close()
set_parse("ewt_ones", id_graphs[sen_id])
!java -Xmx32G -cp alto-2.3.6-SNAPSHOT-all.jar de.up.ling.irtg.script.ParsingEvaluator ewt_ones -g dep_grammar_edges.irtg -I ud -O string=toString -o surface_eval_ewt
text_parse, conll_parse = get_parse("surface_eval_ewt", conll[sen_id])
id_to_parse[sen_id] = (text_parse, conll_parse)
with open("pt_bosque-Pred-Stanford.conllu" , "w") as f:
for i in id_to_parse:
conll_f = id_to_parse[i][1]
for line in conll_f:
f.write(str(line) + "\t")
f.write("\t".join(conll_f[line]))
f.write("\n")
converter.to_tokenized_output("test-results-inflected/", "tokenized_test_results/")
###Output
_____no_output_____ |
docs/contents/tools/sabueso_UniProtKB_XMLDict/get_tissue_specificity.ipynb | ###Markdown
get tissue specificity
###Code
#from sabueso.tools.string_uniprot import to_uniprotkb_XMLDict
#from sabueso.tools.uniprotkb_XMLDict import get_tissue_specificity
#item = to_uniprotkb_XMLDict('uniprot:P19367')
#item = to_uniprotkb_XMLDict('uniprot:P46200')
#item = to_uniprotkb_XMLDict('uniprot:P55197')
#item = to_uniprotkb_XMLDict('uniprot:P05937')
#item = to_uniprotkb_XMLDict('uniprot:P00374')
#item = to_uniprotkb_XMLDict('uniprot:Q9FFX4')
#tissue_specificity = get_tissue_specificity(item)
#tissue_specificity
###Output
_____no_output_____ |
Jupyter/qgis.ipynb | ###Markdown
Using PyQGIS in Jupyter Web references* https://lerryws.xyz/posts/PyQGIS-in-Jupyter-Notebook* https://github.com/3liz/qgis-nbextension/blob/master/examples/render_layer.py* https://docs.qgis.org/testing/en/docs/pyqgis_developer_cookbook/ PyQGIS in Jupyter QGIS can be driven from Python through the [PyQGIS](https://docs.qgis.org/testing/en/docs/pyqgis_developer_cookbook/) API. This notebook shows a small example of that connection, so that maps produced by QGIS can be displayed in a notebook. In essence, we will have a QGIS instance running offline that hands us the map as a sequence of bytes. Connecting to QGIS (standard) The specific paths depend on the operating system and on how QGIS was installed. In the general case, the following initialization would be enough:```pythonfrom osgeo import ogrfrom qgis.core import *from qgis.gui import *from qgis import processingfrom qgis.PyQt.QtGui import QColor, QImagefrom qgis.PyQt.QtCore import QSize, QBuffer, QIODeviceqgs = QgsApplication([], False)qgs.initQgis()```On Windows, the solution is to install and start Jupyter inside the **QGIS Python environment**. This can be done by editing the `python-qgis.bat` script in `OSGeo4W64\bin\`, appending at the end:```pip install notebookjupyter notebook --notebook-dir ``` Connecting to QGIS (non-standard) With QGIS compiled locally and installed in `/usr/local`, as in the following case, the paths to the Python libraries need to be adjusted. The example below is specific to a particular environment, but it can serve as inspiration for other non-standard setups where paths have to be adjusted.
###Code
import os
import sys
from osgeo import ogr
# os.environ['QT_QPA_PLATFORM'] = 'offscreen'
sys.path.insert(0,'/usr/local/share/qgis/python')
from qgis.core import *
QgsApplication.setPrefixPath("/usr/local", True)
from qgis.gui import *
from qgis import processing
from qgis.PyQt.QtGui import QColor, QImage
from qgis.PyQt.QtCore import QSize, QBuffer, QIODevice
qgs = QgsApplication([], False)
qgs.initQgis()
# print(QgsApplication.showSettings())
###Output
_____no_output_____
###Markdown
Loading a layer from a table stored in a GeoPackage A GeoPackage can contain several layers. A layer can have a predefined style associated with it, as in the following case. In this example, the `concelho` (municipality) layer is added to QGIS (which has no layers yet).
###Code
covid_gpkg = "covid-pt-latest.gpkg" + "|layername=concelho"
concelho = QgsVectorLayer(covid_gpkg, "Concelhos", "ogr")
if not concelho.isValid():
print("Layer failed to load!")
else:
QgsProject.instance().addMapLayer(concelho)
print("Layer loaded")
###Output
Layer loaded
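###Markdown
Before iterating over the layer, it can be useful to inspect it; a small sketch using standard PyQGIS layer methods:
###Code
# Quick inspection of the loaded layer.
print("Number of features:", concelho.featureCount())
print("Fields:", [field.name() for field in concelho.fields()])
###Output
_____no_output_____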
###Markdown
Iterate over the features of the layer and show one attribute:
###Code
for c in concelho.getFeatures():
print("Em {} há {} caso(s) confirmados".format(c["concelho"], c["confirmados_concelho_mais_recente"]))
###Output
Em ÁGUEDA há 44 caso(s) confirmados
Em ALBERGARIA-A-VELHA há 72 caso(s) confirmados
Em ANADIA há 36 caso(s) confirmados
Em AROUCA há 31 caso(s) confirmados
Em AVEIRO há 278 caso(s) confirmados
Em CASTELO DE PAIVA há 10 caso(s) confirmados
Em ESPINHO há 69 caso(s) confirmados
Em ESTARREJA há 60 caso(s) confirmados
Em SANTA MARIA DA FEIRA há 387 caso(s) confirmados
Em ÍLHAVO há 108 caso(s) confirmados
Em MEALHADA há 16 caso(s) confirmados
Em MURTOSA há 9 caso(s) confirmados
Em OLIVEIRA DE AZEMÉIS há 163 caso(s) confirmados
Em OLIVEIRA DO BAIRRO há 21 caso(s) confirmados
Em OVAR há 564 caso(s) confirmados
Em SÃO JOÃO DA MADEIRA há 57 caso(s) confirmados
Em SEVER DO VOUGA há 31 caso(s) confirmados
Em VAGOS há 18 caso(s) confirmados
Em VALE DE CAMBRA há 102 caso(s) confirmados
Em ALJUSTREL há NULL caso(s) confirmados
Em ALMODÔVAR há 3 caso(s) confirmados
Em ALVITO há NULL caso(s) confirmados
Em BARRANCOS há NULL caso(s) confirmados
Em BEJA há 9 caso(s) confirmados
Em CASTRO VERDE há NULL caso(s) confirmados
Em CUBA há 3 caso(s) confirmados
Em FERREIRA DO ALENTEJO há NULL caso(s) confirmados
Em MÉRTOLA há NULL caso(s) confirmados
Em MOURA há 39 caso(s) confirmados
Em ODEMIRA há 3 caso(s) confirmados
Em OURIQUE há NULL caso(s) confirmados
Em SERPA há 18 caso(s) confirmados
Em VIDIGUEIRA há NULL caso(s) confirmados
Em AMARES há 45 caso(s) confirmados
Em BARCELOS há 202 caso(s) confirmados
Em BRAGA há 1019 caso(s) confirmados
Em CABECEIRAS DE BASTO há 15 caso(s) confirmados
Em CELORICO DE BASTO há 19 caso(s) confirmados
Em ESPOSENDE há 40 caso(s) confirmados
Em FAFE há 84 caso(s) confirmados
Em GUIMARÃES há 507 caso(s) confirmados
Em PÓVOA DE LANHOSO há 44 caso(s) confirmados
Em TERRAS DE BOURO há 9 caso(s) confirmados
Em VIEIRA DO MINHO há 28 caso(s) confirmados
Em VILA NOVA DE FAMALICÃO há 339 caso(s) confirmados
Em VILA VERDE há 145 caso(s) confirmados
Em VIZELA há 78 caso(s) confirmados
Em ALFÂNDEGA DA FÉ há 4 caso(s) confirmados
Em BRAGANÇA há 101 caso(s) confirmados
Em CARRAZEDA DE ANSIÃES há 7 caso(s) confirmados
Em FREIXO DE ESPADA À CINTA há NULL caso(s) confirmados
Em MACEDO DE CAVALEIROS há 20 caso(s) confirmados
Em MIRANDA DO DOURO há 7 caso(s) confirmados
Em MIRANDELA há 18 caso(s) confirmados
Em MOGADOURO há 3 caso(s) confirmados
Em TORRE DE MONCORVO há 22 caso(s) confirmados
Em VILA FLOR há 5 caso(s) confirmados
Em VIMIOSO há 8 caso(s) confirmados
Em VINHAIS há 26 caso(s) confirmados
Em BELMONTE há NULL caso(s) confirmados
Em CASTELO BRANCO há 5 caso(s) confirmados
Em COVILHÃ há 7 caso(s) confirmados
Em FUNDÃO há 3 caso(s) confirmados
Em IDANHA-A-NOVA há NULL caso(s) confirmados
Em OLEIROS há NULL caso(s) confirmados
Em PENAMACOR há NULL caso(s) confirmados
Em PROENÇA-A-NOVA há NULL caso(s) confirmados
Em SERTÃ há 4 caso(s) confirmados
Em VILA DE REI há NULL caso(s) confirmados
Em VILA VELHA DE RÓDÃO há NULL caso(s) confirmados
Em ARGANIL há 8 caso(s) confirmados
Em CANTANHEDE há 50 caso(s) confirmados
Em COIMBRA há 401 caso(s) confirmados
Em CONDEIXA-A-NOVA há 68 caso(s) confirmados
Em FIGUEIRA DA FOZ há 23 caso(s) confirmados
Em GÓIS há 10 caso(s) confirmados
Em LOUSÃ há 13 caso(s) confirmados
Em MIRA há 4 caso(s) confirmados
Em MIRANDA DO CORVO há 13 caso(s) confirmados
Em MONTEMOR-O-VELHO há 16 caso(s) confirmados
Em OLIVEIRA DO HOSPITAL há 10 caso(s) confirmados
Em PAMPILHOSA DA SERRA há NULL caso(s) confirmados
Em PENACOVA há 16 caso(s) confirmados
Em PENELA há 3 caso(s) confirmados
Em SOURE há 21 caso(s) confirmados
Em TÁBUA há 33 caso(s) confirmados
Em VILA NOVA DE POIARES há 4 caso(s) confirmados
Em ALANDROAL há NULL caso(s) confirmados
Em ARRAIOLOS há NULL caso(s) confirmados
Em BORBA há NULL caso(s) confirmados
Em ESTREMOZ há NULL caso(s) confirmados
Em ÉVORA há 19 caso(s) confirmados
Em MONTEMOR-O-NOVO há 5 caso(s) confirmados
Em MORA há NULL caso(s) confirmados
Em MOURÃO há NULL caso(s) confirmados
Em PORTEL há 3 caso(s) confirmados
Em REDONDO há NULL caso(s) confirmados
Em REGUENGOS DE MONSARAZ há 5 caso(s) confirmados
Em VENDAS NOVAS há 7 caso(s) confirmados
Em VIANA DO ALENTEJO há NULL caso(s) confirmados
Em VILA VIÇOSA há NULL caso(s) confirmados
Em ALBUFEIRA há 69 caso(s) confirmados
Em ALCOUTIM há NULL caso(s) confirmados
Em ALJEZUR há NULL caso(s) confirmados
Em CASTRO MARIM há 3 caso(s) confirmados
Em FARO há 60 caso(s) confirmados
Em LAGOA há 9 caso(s) confirmados
Em LAGOS há 4 caso(s) confirmados
Em LOULÉ há 61 caso(s) confirmados
Em MONCHIQUE há NULL caso(s) confirmados
Em OLHÃO há 15 caso(s) confirmados
Em PORTIMÃO há 35 caso(s) confirmados
Em SÃO BRÁS DE ALPORTEL há NULL caso(s) confirmados
Em SILVES há 21 caso(s) confirmados
Em TAVIRA há 30 caso(s) confirmados
Em VILA DO BISPO há NULL caso(s) confirmados
Em VILA REAL DE SANTO ANTÓNIO há 17 caso(s) confirmados
Em AGUIAR DA BEIRA há NULL caso(s) confirmados
Em ALMEIDA há 6 caso(s) confirmados
Em CELORICO DA BEIRA há 9 caso(s) confirmados
Em FIGUEIRA DE CASTELO RODRIGO há 3 caso(s) confirmados
Em FORNOS DE ALGODRES há NULL caso(s) confirmados
Em GOUVEIA há 19 caso(s) confirmados
Em GUARDA há 20 caso(s) confirmados
Em MANTEIGAS há 3 caso(s) confirmados
Em MÊDA há NULL caso(s) confirmados
Em PINHEL há 23 caso(s) confirmados
Em SABUGAL há NULL caso(s) confirmados
Em SEIA há 10 caso(s) confirmados
Em TRANCOSO há 17 caso(s) confirmados
Em VILA NOVA DE FOZ CÔA há 80 caso(s) confirmados
Em ALCOBAÇA há 27 caso(s) confirmados
Em ALVAIÁZERE há 27 caso(s) confirmados
Em ANSIÃO há 5 caso(s) confirmados
Em BATALHA há 4 caso(s) confirmados
Em BOMBARRAL há 4 caso(s) confirmados
Em CALDAS DA RAINHA há 19 caso(s) confirmados
Em CASTANHEIRA DE PÊRA há NULL caso(s) confirmados
Em FIGUEIRÓ DOS VINHOS há 4 caso(s) confirmados
Em LEIRIA há 64 caso(s) confirmados
Em MARINHA GRANDE há 16 caso(s) confirmados
Em NAZARÉ há NULL caso(s) confirmados
Em ÓBIDOS há NULL caso(s) confirmados
Em PEDRÓGÃO GRANDE há 3 caso(s) confirmados
Em PENICHE há 10 caso(s) confirmados
Em POMBAL há 49 caso(s) confirmados
Em PORTO DE MÓS há 8 caso(s) confirmados
Em ALENQUER há 18 caso(s) confirmados
Em ARRUDA DOS VINHOS há 5 caso(s) confirmados
Em AZAMBUJA há 7 caso(s) confirmados
Em CADAVAL há 5 caso(s) confirmados
Em CASCAIS há 320 caso(s) confirmados
Em LISBOA há 1413 caso(s) confirmados
Em LOURES há 315 caso(s) confirmados
Em LOURINHÃ há 5 caso(s) confirmados
Em MAFRA há 67 caso(s) confirmados
Em OEIRAS há 218 caso(s) confirmados
Em SINTRA há 568 caso(s) confirmados
Em SOBRAL DE MONTE AGRAÇO há NULL caso(s) confirmados
Em TORRES VEDRAS há 31 caso(s) confirmados
Em VILA FRANCA DE XIRA há 160 caso(s) confirmados
Em AMADORA há 273 caso(s) confirmados
Em ODIVELAS há 208 caso(s) confirmados
Em ALTER DO CHÃO há NULL caso(s) confirmados
Em ARRONCHES há NULL caso(s) confirmados
Em AVIS há NULL caso(s) confirmados
Em CAMPO MAIOR há NULL caso(s) confirmados
Em CASTELO DE VIDE há NULL caso(s) confirmados
Em CRATO há NULL caso(s) confirmados
Em ELVAS há 8 caso(s) confirmados
Em FRONTEIRA há NULL caso(s) confirmados
Em GAVIÃO há NULL caso(s) confirmados
Em MARVÃO há NULL caso(s) confirmados
Em MONFORTE há NULL caso(s) confirmados
Em NISA há NULL caso(s) confirmados
Em PONTE DE SOR há NULL caso(s) confirmados
Em PORTALEGRE há 6 caso(s) confirmados
Em SOUSEL há NULL caso(s) confirmados
Em AMARANTE há 81 caso(s) confirmados
Em BAIÃO há 13 caso(s) confirmados
Em FELGUEIRAS há 308 caso(s) confirmados
Em GONDOMAR há 966 caso(s) confirmados
Em LOUSADA há 174 caso(s) confirmados
Em MAIA há 826 caso(s) confirmados
Em MARCO DE CANAVESES há 63 caso(s) confirmados
Em MATOSINHOS há 1017 caso(s) confirmados
Em PAÇOS DE FERREIRA há 238 caso(s) confirmados
Em PAREDES há 274 caso(s) confirmados
Em PENAFIEL há 143 caso(s) confirmados
Em PORTO há 1211 caso(s) confirmados
Em PÓVOA DE VARZIM há 116 caso(s) confirmados
Em SANTO TIRSO há 308 caso(s) confirmados
Em VALONGO há 700 caso(s) confirmados
Em VILA DO CONDE há 235 caso(s) confirmados
Em VILA NOVA DE GAIA há 1263 caso(s) confirmados
Em TROFA há 129 caso(s) confirmados
Em ABRANTES há 8 caso(s) confirmados
Em ALCANENA há 7 caso(s) confirmados
Em ALMEIRIM há 14 caso(s) confirmados
Em ALPIARÇA há 9 caso(s) confirmados
Em BENAVENTE há 29 caso(s) confirmados
Em CARTAXO há 23 caso(s) confirmados
Em CHAMUSCA há 9 caso(s) confirmados
Em CONSTÂNCIA há NULL caso(s) confirmados
Em CORUCHE há 36 caso(s) confirmados
Em ENTRONCAMENTO há 4 caso(s) confirmados
Em FERREIRA DO ZÊZERE há NULL caso(s) confirmados
Em GOLEGÃ há NULL caso(s) confirmados
Em MAÇÃO há NULL caso(s) confirmados
Em RIO MAIOR há 13 caso(s) confirmados
Em SALVATERRA DE MAGOS há 8 caso(s) confirmados
Em SANTARÉM há 73 caso(s) confirmados
Em SARDOAL há NULL caso(s) confirmados
Em TOMAR há 11 caso(s) confirmados
Em TORRES NOVAS há 11 caso(s) confirmados
Em VILA NOVA DA BARQUINHA há 3 caso(s) confirmados
Em OURÉM há 29 caso(s) confirmados
Em ALCÁCER DO SAL há 4 caso(s) confirmados
Em ALCOCHETE há 14 caso(s) confirmados
Em ALMADA há 231 caso(s) confirmados
Em BARREIRO há 89 caso(s) confirmados
Em GRÂNDOLA há 7 caso(s) confirmados
Em MOITA há 61 caso(s) confirmados
Em MONTIJO há 44 caso(s) confirmados
Em PALMELA há 16 caso(s) confirmados
Em SANTIAGO DO CACÉM há 14 caso(s) confirmados
Em SEIXAL há 163 caso(s) confirmados
Em SESIMBRA há 20 caso(s) confirmados
Em SETÚBAL há 59 caso(s) confirmados
Em SINES há NULL caso(s) confirmados
Em ARCOS DE VALDEVEZ há 61 caso(s) confirmados
Em CAMINHA há 14 caso(s) confirmados
Em MELGAÇO há 38 caso(s) confirmados
Em MONÇÃO há 68 caso(s) confirmados
Em PAREDES DE COURA há 7 caso(s) confirmados
Em PONTE DA BARCA há 7 caso(s) confirmados
Em PONTE DE LIMA há 24 caso(s) confirmados
Em VALENÇA há 7 caso(s) confirmados
Em VIANA DO CASTELO há 144 caso(s) confirmados
Em VILA NOVA DE CERVEIRA há 6 caso(s) confirmados
Em ALIJÓ há 3 caso(s) confirmados
Em BOTICAS há NULL caso(s) confirmados
Em CHAVES há 25 caso(s) confirmados
Em MESÃO FRIO há NULL caso(s) confirmados
Em MONDIM DE BASTO há NULL caso(s) confirmados
Em MONTALEGRE há 3 caso(s) confirmados
Em MURÇA há 12 caso(s) confirmados
Em PESO DA RÉGUA há 52 caso(s) confirmados
Em RIBEIRA DE PENA há 3 caso(s) confirmados
Em SABROSA há 7 caso(s) confirmados
Em SANTA MARTA DE PENAGUIÃO há NULL caso(s) confirmados
Em VALPAÇOS há 6 caso(s) confirmados
Em VILA POUCA DE AGUIAR há 3 caso(s) confirmados
Em VILA REAL há 151 caso(s) confirmados
Em ARMAMAR há NULL caso(s) confirmados
Em CARREGAL DO SAL há 12 caso(s) confirmados
Em CASTRO DAIRE há 104 caso(s) confirmados
Em CINFÃES há 10 caso(s) confirmados
Em LAMEGO há 33 caso(s) confirmados
Em MANGUALDE há 70 caso(s) confirmados
Em MOIMENTA DA BEIRA há 11 caso(s) confirmados
Em MORTÁGUA há 8 caso(s) confirmados
Em NELAS há 14 caso(s) confirmados
Em OLIVEIRA DE FRADES há 8 caso(s) confirmados
Em PENALVA DO CASTELO há NULL caso(s) confirmados
Em PENEDONO há NULL caso(s) confirmados
Em RESENDE há 67 caso(s) confirmados
Em SANTA COMBA DÃO há 9 caso(s) confirmados
Em SÃO JOÃO DA PESQUEIRA há NULL caso(s) confirmados
Em SÃO PEDRO DO SUL há 8 caso(s) confirmados
Em SÁTÃO há 7 caso(s) confirmados
Em SERNANCELHE há NULL caso(s) confirmados
Em TABUAÇO há NULL caso(s) confirmados
Em TAROUCA há NULL caso(s) confirmados
Em TONDELA há 13 caso(s) confirmados
Em VILA NOVA DE PAIVA há NULL caso(s) confirmados
Em VISEU há 83 caso(s) confirmados
Em VOUZELA há 7 caso(s) confirmados
###Markdown
Instead of iterating over the whole layer, we can create a filter on the layer.
###Code
expr_sem_casos = QgsExpression( " \"confirmados_concelho_mais_recente\" IS NULL " )
virgens = list(concelho.getFeatures( QgsFeatureRequest( expr_sem_casos ) ))
for c in virgens:
print("Em {} não há pelos menos 3 casos confirmados".format(c["concelho"]))
###Output
Em ALJUSTREL não há pelos menos 3 casos confirmados
Em ALVITO não há pelos menos 3 casos confirmados
Em BARRANCOS não há pelos menos 3 casos confirmados
Em CASTRO VERDE não há pelos menos 3 casos confirmados
Em FERREIRA DO ALENTEJO não há pelos menos 3 casos confirmados
Em MÉRTOLA não há pelos menos 3 casos confirmados
Em OURIQUE não há pelos menos 3 casos confirmados
Em VIDIGUEIRA não há pelos menos 3 casos confirmados
Em FREIXO DE ESPADA À CINTA não há pelos menos 3 casos confirmados
Em BELMONTE não há pelos menos 3 casos confirmados
Em IDANHA-A-NOVA não há pelos menos 3 casos confirmados
Em OLEIROS não há pelos menos 3 casos confirmados
Em PENAMACOR não há pelos menos 3 casos confirmados
Em PROENÇA-A-NOVA não há pelos menos 3 casos confirmados
Em VILA DE REI não há pelos menos 3 casos confirmados
Em VILA VELHA DE RÓDÃO não há pelos menos 3 casos confirmados
Em PAMPILHOSA DA SERRA não há pelos menos 3 casos confirmados
Em ALANDROAL não há pelos menos 3 casos confirmados
Em ARRAIOLOS não há pelos menos 3 casos confirmados
Em BORBA não há pelos menos 3 casos confirmados
Em ESTREMOZ não há pelos menos 3 casos confirmados
Em MORA não há pelos menos 3 casos confirmados
Em MOURÃO não há pelos menos 3 casos confirmados
Em REDONDO não há pelos menos 3 casos confirmados
Em VIANA DO ALENTEJO não há pelos menos 3 casos confirmados
Em VILA VIÇOSA não há pelos menos 3 casos confirmados
Em ALCOUTIM não há pelos menos 3 casos confirmados
Em ALJEZUR não há pelos menos 3 casos confirmados
Em MONCHIQUE não há pelos menos 3 casos confirmados
Em SÃO BRÁS DE ALPORTEL não há pelos menos 3 casos confirmados
Em VILA DO BISPO não há pelos menos 3 casos confirmados
Em AGUIAR DA BEIRA não há pelos menos 3 casos confirmados
Em FORNOS DE ALGODRES não há pelos menos 3 casos confirmados
Em MÊDA não há pelos menos 3 casos confirmados
Em SABUGAL não há pelos menos 3 casos confirmados
Em CASTANHEIRA DE PÊRA não há pelos menos 3 casos confirmados
Em NAZARÉ não há pelos menos 3 casos confirmados
Em ÓBIDOS não há pelos menos 3 casos confirmados
Em SOBRAL DE MONTE AGRAÇO não há pelos menos 3 casos confirmados
Em ALTER DO CHÃO não há pelos menos 3 casos confirmados
Em ARRONCHES não há pelos menos 3 casos confirmados
Em AVIS não há pelos menos 3 casos confirmados
Em CAMPO MAIOR não há pelos menos 3 casos confirmados
Em CASTELO DE VIDE não há pelos menos 3 casos confirmados
Em CRATO não há pelos menos 3 casos confirmados
Em FRONTEIRA não há pelos menos 3 casos confirmados
Em GAVIÃO não há pelos menos 3 casos confirmados
Em MARVÃO não há pelos menos 3 casos confirmados
Em MONFORTE não há pelos menos 3 casos confirmados
Em NISA não há pelos menos 3 casos confirmados
Em PONTE DE SOR não há pelos menos 3 casos confirmados
Em SOUSEL não há pelos menos 3 casos confirmados
Em CONSTÂNCIA não há pelos menos 3 casos confirmados
Em FERREIRA DO ZÊZERE não há pelos menos 3 casos confirmados
Em GOLEGÃ não há pelos menos 3 casos confirmados
Em MAÇÃO não há pelos menos 3 casos confirmados
Em SARDOAL não há pelos menos 3 casos confirmados
Em SINES não há pelos menos 3 casos confirmados
Em BOTICAS não há pelos menos 3 casos confirmados
Em MESÃO FRIO não há pelos menos 3 casos confirmados
Em MONDIM DE BASTO não há pelos menos 3 casos confirmados
Em SANTA MARTA DE PENAGUIÃO não há pelos menos 3 casos confirmados
Em ARMAMAR não há pelos menos 3 casos confirmados
Em PENALVA DO CASTELO não há pelos menos 3 casos confirmados
Em PENEDONO não há pelos menos 3 casos confirmados
Em SÃO JOÃO DA PESQUEIRA não há pelos menos 3 casos confirmados
Em SERNANCELHE não há pelos menos 3 casos confirmados
Em TABUAÇO não há pelos menos 3 casos confirmados
Em TAROUCA não há pelos menos 3 casos confirmados
Em VILA NOVA DE PAIVA não há pelos menos 3 casos confirmados
###Markdown
Generating the map involves a few technical details. Ideally, these would be encapsulated in a function; that challenge is left to the reader (a sketch of one possible wrapper follows the rendering cell below).
###Code
xt = concelho.extent()
# print(xt)
width = 200
height = int(width*xt.height()/xt.width())
print("Gerar mapa com {} por {}".format(width, height))
options = QgsMapSettings()
options.setLayers([concelho])
options.setBackgroundColor(QColor(255, 255, 255))
options.setOutputSize(QSize(width, height))
options.setExtent(xt)
render = QgsMapRendererParallelJob(options)
render.start()
render.waitForFinished()
image = render.renderedImage()
from IPython.display import Image
imgbuf= QBuffer()
imgbuf.open( QIODevice.ReadWrite )
image.save( imgbuf,"PNG" )
Image( imgbuf.data() )
###Output
_____no_output_____
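###Markdown
As suggested above, the rendering boilerplate can be wrapped in a helper; a minimal sketch, reusing only the classes already imported in this notebook:
###Code
def render_layer_png(layer, width=200):
    """Render a single layer to a PNG byte buffer and return an IPython Image."""
    xt = layer.extent()
    height = int(width * xt.height() / xt.width())
    options = QgsMapSettings()
    options.setLayers([layer])
    options.setBackgroundColor(QColor(255, 255, 255))
    options.setOutputSize(QSize(width, height))
    options.setExtent(xt)
    render = QgsMapRendererParallelJob(options)
    render.start()
    render.waitForFinished()
    image = render.renderedImage()
    imgbuf = QBuffer()
    imgbuf.open(QIODevice.ReadWrite)
    image.save(imgbuf, "PNG")
    return Image(imgbuf.data())
# Example usage: render_layer_png(concelho, width=300)
###Output
_____no_output_____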
###Markdown
If you want to close the running QGIS instance, finish with:
###Code
qgs.exitQgis()
###Output
_____no_output_____ |
Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization/week5/Initialization/Initialization.ipynb | ###Markdown
InitializationWelcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initializationThere are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[0. 0. 0.]
[0. 0. 0.]]
b1 = [[0.]
[0.]]
W2 = [[0. 0.]]
b2 = [[0.]]
###Markdown
**Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
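###Markdown
Before looking at the predictions, here is a tiny numpy illustration (not part of the assignment) of why zero initialization fails: with zero weights every hidden unit computes the same pre-activation for every input, so every unit receives the same gradient and the units never differentiate.
###Code
# Illustrative only: hidden units with identical (zero) weights produce identical outputs,
# so their gradients, and therefore their updates, are identical as well.
W_demo = np.zeros((2, 3))        # 2 hidden units, 3 inputs
x_demo = np.random.randn(3, 4)   # 4 examples
print(W_demo.dot(x_demo))        # both rows are identical
###Output
_____no_output_____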
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initializationTo break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
    np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1])*10
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[0.]
[0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[0.]]
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
C:\Users\abdur\Desktop\DL\DL_Course2\week5\Initialization\init_utils.py:145: RuntimeWarning: divide by zero encountered in log
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
C:\Users\abdur\Desktop\DL\DL_Course2\week5\Initialization\init_utils.py:145: RuntimeWarning: invalid value encountered in multiply
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results than before. The model is no longer outputting all 0s.
###Code
print (predictions_train)
print (predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully, initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part! 4 - He initializationFinally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
import math
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1])*math.sqrt(2./layers_dims[l-1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))*math.sqrt(2./layers_dims[l-1])
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[0.]
[0.]
[0.]
[0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[0.]]
###Markdown
**Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____ |
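###Markdown
As a final, purely illustrative check, you can compare the scale of He-initialized weights with the large random initialization used earlier; He initialization keeps the weight standard deviation near sqrt(2/n_prev), which helps ReLU activations neither explode nor vanish as the network gets deeper.
###Code
n_prev = 10
w_large = np.random.randn(5, n_prev) * 10                  # "random" initialization from above
w_he = np.random.randn(5, n_prev) * np.sqrt(2.0 / n_prev)  # He initialization
print("std (large random):", w_large.std())
print("std (He):", w_he.std())
###Output
_____no_output_____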
docs/tutorials/rv-multi.ipynb | ###Markdown
(rv-multi)= RVs with multiple instruments
###Code
import exoplanet
exoplanet.utils.docs_setup()
print(f"exoplanet.__version__ = '{exoplanet.__version__}'")
###Output
_____no_output_____
###Markdown
In this case study, we will look at how we can use exoplanet and PyMC3 to combine datasets from different RV instruments to fit the orbit of an exoplanet system.Before getting started, I want to emphasize that the exoplanet code doesn't have strong opinions about how your data are collected; it only provides extensions that allow PyMC3 to evaluate some astronomy-specific functions.This means that you can build any kind of observation model that PyMC3 supports, and support for multiple instruments isn't really a *feature* of exoplanet, even though it is easy to implement.For the example, we'll use public observations of Pi Mensae which hosts two planets, but we'll ignore the inner planet because the significance of the RV signal is small enough that it won't affect our results.The datasets that we'll use are from the Anglo-Australian Planet Search (AAT) and the HARPS archive.As is commonly done, we will treat the HARPS observations as two independent datasets split in June 2015 when the HARPS hardware was upgraded.Therefore, we'll consider three datasets that we will allow to have different instrumental parameters (RV offset and jitter), but shared orbital parameters and stellar variability.In some cases you might also want to have a different astrophysical variability model for each instrument (if, for example, the observations are made in very different bands), but we'll keep things simple for this example.The AAT data are available from [The Exoplanet Archive](https://exoplanetarchive.ipac.caltech.edu/) and the HARPS observations can be downloaded from the [ESO Archive](http://archive.eso.org/wdb/wdb/adp/phase3_spectral/form).For the sake of simplicity, we have extracted the HARPS RVs from the archive in advance using [Megan Bedell's harps_tools library](https://github.com/megbedell/harps_tools).To start, download the data and plot them with a (very!) rough zero point correction.
###Code
import numpy as np
import pandas as pd
from astropy.io import ascii
import matplotlib.pyplot as plt
aat = ascii.read(
"https://exoplanetarchive.ipac.caltech.edu/data/ExoData/0026/0026394/data/UID_0026394_RVC_001.tbl"
)
harps = pd.read_csv(
"https://raw.githubusercontent.com/exoplanet-dev/case-studies/main/data/pi_men_harps_rvs.csv",
skiprows=1,
)
harps = harps.rename(lambda x: x.strip().strip("#"), axis=1)
harps_post = np.array(harps.date > "2015-07-01", dtype=int)
t = np.concatenate((aat["JD"], harps["bjd"]))
rv = np.concatenate((aat["Radial_Velocity"], harps["rv"]))
rv_err = np.concatenate((aat["Radial_Velocity_Uncertainty"], harps["e_rv"]))
inst_id = np.concatenate((np.zeros(len(aat), dtype=int), harps_post + 1))
inds = np.argsort(t)
t = np.ascontiguousarray(t[inds], dtype=float)
rv = np.ascontiguousarray(rv[inds], dtype=float)
rv_err = np.ascontiguousarray(rv_err[inds], dtype=float)
inst_id = np.ascontiguousarray(inst_id[inds], dtype=int)
inst_names = ["aat", "harps_pre", "harps_post"]
num_inst = len(inst_names)
for i, name in enumerate(inst_names):
m = inst_id == i
plt.errorbar(
t[m], rv[m] - np.min(rv[m]), yerr=rv_err[m], fmt=".", label=name
)
plt.legend(fontsize=10)
plt.xlabel("BJD")
_ = plt.ylabel("radial velocity [m/s]")
###Output
_____no_output_____
###Markdown
Then set up the probabilistic model.Most of this is similar to the model in the [Radial velocity fitting](https://docs.exoplanet.codes/en/stable/tutorials/rv/) tutorial, but there are a few changes to highlight:1. Instead of a polynomial model for trends, stellar variability, and inner planets, we're using a Gaussian process here. This won't have a big effect here, but more careful consideration should be performed when studying lower signal-to-noise systems.2. There are three radial velocity offsets and three jitter parameters (one for each instrument) that will be treated independently. This is the key addition made by this case study.
###Code
import pymc3 as pm
import exoplanet as xo
import aesara_theano_fallback.tensor as tt
import pymc3_ext as pmx
from celerite2.theano import terms, GaussianProcess
t_phase = np.linspace(-0.5, 0.5, 5000)
with pm.Model() as model:
# Parameters describing the orbit
log_K = pm.Normal("log_K", mu=np.log(300), sigma=10)
log_P = pm.Normal("log_P", mu=np.log(2093.07), sigma=10)
K = pm.Deterministic("K", tt.exp(log_K))
P = pm.Deterministic("P", tt.exp(log_P))
ecs = pmx.UnitDisk("ecs", testval=np.array([0.7, -0.3]))
ecc = pm.Deterministic("ecc", tt.sum(ecs ** 2))
omega = pm.Deterministic("omega", tt.arctan2(ecs[1], ecs[0]))
phase = pmx.UnitUniform("phase")
tp = pm.Deterministic("tp", 0.5 * (t.min() + t.max()) + phase * P)
orbit = xo.orbits.KeplerianOrbit(
period=P, t_periastron=tp, ecc=ecc, omega=omega
)
# Noise model parameters
log_sigma_gp = pm.Normal("log_sigma_gp", mu=np.log(10), sigma=50)
log_rho_gp = pm.Normal("log_rho_gp", mu=np.log(50), sigma=50)
# Per instrument parameters
means = pm.Normal(
"means",
mu=np.array([np.median(rv[inst_id == i]) for i in range(num_inst)]),
sigma=200,
shape=num_inst,
)
sigmas = pm.HalfNormal("sigmas", sigma=10, shape=num_inst)
# Compute the RV offset and jitter for each data point depending on its instrument
mean = tt.zeros(len(t))
diag = tt.zeros(len(t))
for i in range(len(inst_names)):
mean += means[i] * (inst_id == i)
diag += (rv_err ** 2 + sigmas[i] ** 2) * (inst_id == i)
pm.Deterministic("mean", mean)
pm.Deterministic("diag", diag)
resid = rv - mean
def rv_model(x):
return orbit.get_radial_velocity(x, K=K)
kernel = terms.SHOTerm(
sigma=tt.exp(log_sigma_gp), rho=tt.exp(log_rho_gp), Q=1.0 / 3
)
gp = GaussianProcess(kernel, t=t, diag=diag, mean=rv_model)
gp.marginal("obs", observed=resid)
pm.Deterministic("gp_pred", gp.predict(resid, include_mean=False))
pm.Deterministic("rv_phase", rv_model(P * t_phase + tp))
map_soln = model.test_point
map_soln = pmx.optimize(map_soln, [means])
map_soln = pmx.optimize(map_soln, [means, phase])
map_soln = pmx.optimize(map_soln, [means, phase, log_K])
map_soln = pmx.optimize(map_soln, [means, tp, K, log_P, ecs])
map_soln = pmx.optimize(map_soln, [sigmas, log_sigma_gp, log_rho_gp])
map_soln = pmx.optimize(map_soln)
###Output
_____no_output_____
###Markdown
After fitting for the parameters that maximize the posterior probability, we can plot this model to make sure that things are looking reasonable:
###Code
t_pred = np.linspace(t.min() - 400, t.max() + 400, 5000)
with model:
plt.plot(
t_pred, pmx.eval_in_model(rv_model(t_pred), map_soln), "k", lw=0.5
)
detrended = rv - map_soln["mean"] - map_soln["gp_pred"]
plt.errorbar(t, detrended, yerr=rv_err, fmt=",k")
plt.scatter(
t, detrended, c=inst_id, s=8, zorder=100, cmap="tab10", vmin=0, vmax=10
)
plt.xlim(t_pred.min(), t_pred.max())
plt.xlabel("BJD")
plt.ylabel("radial velocity [m/s]")
_ = plt.title("map model", fontsize=14)
###Output
_____no_output_____
###Markdown
That looks fine, so now we can run the MCMC sampler:
###Code
with model:
trace = pmx.sample(
tune=1000,
draws=1000,
start=map_soln,
chains=2,
cores=2,
return_inferencedata=True,
random_seed=[39091, 39095],
)
###Output
_____no_output_____
###Markdown
Then we can look at some summaries of the trace and the constraints on some of the key parameters:
###Code
import corner
import arviz as az
corner.corner(trace, var_names=["P", "K", "tp", "ecc", "omega"])
az.summary(
trace, var_names=["P", "K", "tp", "ecc", "omega", "means", "sigmas"]
)
###Output
_____no_output_____
###Markdown
And finally we can plot the phased RV curve and overplot our posterior inference:
###Code
flat_samps = trace.posterior.stack(sample=("chain", "draw"))
mu = np.mean(flat_samps["mean"].values + flat_samps["gp_pred"].values, axis=-1)
mu_var = np.var(flat_samps["mean"], axis=-1)
jitter_var = np.median(flat_samps["diag"], axis=-1)
period = np.median(flat_samps["P"])
tp = np.median(flat_samps["tp"])
detrended = rv - mu
folded = ((t - tp + 0.5 * period) % period) / period
plt.errorbar(folded, detrended, yerr=np.sqrt(mu_var + jitter_var), fmt=",k")
plt.scatter(
folded,
detrended,
c=inst_id,
s=8,
zorder=100,
cmap="tab10",
vmin=0,
vmax=10,
)
plt.errorbar(
folded + 1, detrended, yerr=np.sqrt(mu_var + jitter_var), fmt=",k"
)
plt.scatter(
folded + 1,
detrended,
c=inst_id,
s=8,
zorder=100,
cmap="tab10",
vmin=0,
vmax=10,
)
x = t_phase + 0.5
y = np.mean(flat_samps["rv_phase"], axis=-1)
plt.plot(x, y, "k", lw=0.5, alpha=0.5)
plt.plot(x + 1, y, "k", lw=0.5, alpha=0.5)
plt.axvline(1, color="k", lw=0.5)
plt.xlim(0, 2)
plt.xlabel("phase")
plt.ylabel("radial velocity [m/s]")
_ = plt.title("posterior inference", fontsize=14)
###Output
_____no_output_____
###Markdown
CitationsAs described in the [citation tutorial](https://docs.exoplanet.codes/en/stable/tutorials/citation/), we can use [citations.get_citations_for_model](https://docs.exoplanet.codes/en/stable/user/api/exoplanet.citations.get_citations_for_model) to construct an acknowledgement and BibTeX listing that includes the relevant citations for this model.
###Code
with model:
txt, bib = xo.citations.get_citations_for_model()
print(txt)
print(bib.split("\n\n")[0] + "\n\n...")
###Output
_____no_output_____
###Markdown
(rv-multi)= RVs with multiple instruments
###Code
import exoplanet
exoplanet.utils.docs_setup()
print(f"exoplanet.__version__ = '{exoplanet.__version__}'")
###Output
_____no_output_____
###Markdown
In this case study, we will look at how we can use exoplanet and PyMC3 to combine datasets from different RV instruments to fit the orbit of an exoplanet system.Before getting started, I want to emphasize that the exoplanet code doesn't have strong opinions about how your data are collected, it only provides extensions that allow PyMC3 to evaluate some astronomy-specific functions.This means that you can build any kind of observation model that PyMC3 supports, and support for multiple instruments isn't really a *feature* of exoplanet, even though it is easy to implement.For the example, we'll use public observations of Pi Mensae which hosts two planets, but we'll ignore the inner planet because the significance of the RV signal is small enough that it won't affect our results.The datasets that we'll use are from the Anglo-Australian Planet Search (AAT) and the HARPS archive.As is commonly done, we will treat the HARPS observations as two independent datasets split in June 2015 when the HARPS hardware was upgraded.Therefore, we'll consider three datasets that we will allow to have different instrumental parameters (RV offset and jitter), but shared orbital parameters and stellar variability.In some cases you might also want to have a different astrophysical variability model for each instrument (if, for example, the observations are made in very different bands), but we'll keep things simple for this example.The AAT data are available from [The Exoplanet Archive](https://exoplanetarchive.ipac.caltech.edu/) and the HARPS observations can be downloaded from the [ESO Archive](http://archive.eso.org/wdb/wdb/adp/phase3_spectral/form).For the sake of simplicity, we have extracted the HARPS RVs from the archive in advance using [Megan Bedell's harps_tools library](https://github.com/megbedell/harps_tools).To start, download the data and plot them with a (very!) rough zero point correction.
###Code
import numpy as np
import pandas as pd
from astropy.io import ascii
import matplotlib.pyplot as plt
aat = ascii.read(
"https://exoplanetarchive.ipac.caltech.edu/data/ExoData/0026/0026394/data/UID_0026394_RVC_001.tbl"
)
harps = pd.read_csv(
"https://raw.githubusercontent.com/exoplanet-dev/case-studies/main/data/pi_men_harps_rvs.csv",
skiprows=1,
)
harps = harps.rename(lambda x: x.strip().strip("#"), axis=1)
harps_post = np.array(harps.date > "2015-07-01", dtype=int)
t = np.concatenate((aat["JD"], harps["bjd"]))
rv = np.concatenate((aat["Radial_Velocity"], harps["rv"]))
rv_err = np.concatenate((aat["Radial_Velocity_Uncertainty"], harps["e_rv"]))
inst_id = np.concatenate((np.zeros(len(aat), dtype=int), harps_post + 1))
inds = np.argsort(t)
t = np.ascontiguousarray(t[inds], dtype=float)
rv = np.ascontiguousarray(rv[inds], dtype=float)
rv_err = np.ascontiguousarray(rv_err[inds], dtype=float)
inst_id = np.ascontiguousarray(inst_id[inds], dtype=int)
inst_names = ["aat", "harps_pre", "harps_post"]
num_inst = len(inst_names)
for i, name in enumerate(inst_names):
m = inst_id == i
plt.errorbar(
t[m], rv[m] - np.min(rv[m]), yerr=rv_err[m], fmt=".", label=name
)
plt.legend(fontsize=10)
plt.xlabel("BJD")
_ = plt.ylabel("radial velocity [m/s]")
###Output
_____no_output_____
###Markdown
Then set up the probabilistic model.Most of this is similar to the model in the [Radial velocity fitting](https://gallery.exoplanet.codes/tutorials/rv/) tutorial, but there are a few changes to highlight:1. Instead of a polynomial model for trends, stellar variability, and inner planets, we're using a Gaussian process here. This won't have a big effect here, but more careful consideration should be performed when studying lower signal-to-noise systems.2. There are three radial velocity offsets and three jitter parameters (one for each instrument) that will be treated independently. This is the key addition made by this case study.
###Code
import pymc3 as pm
import exoplanet as xo
import aesara_theano_fallback.tensor as tt
import pymc3_ext as pmx
from celerite2.theano import terms, GaussianProcess
t_phase = np.linspace(-0.5, 0.5, 5000)
with pm.Model() as model:
# Parameters describing the orbit
log_K = pm.Normal("log_K", mu=np.log(300), sigma=10)
log_P = pm.Normal("log_P", mu=np.log(2093.07), sigma=10)
K = pm.Deterministic("K", tt.exp(log_K))
P = pm.Deterministic("P", tt.exp(log_P))
ecs = pmx.UnitDisk("ecs", testval=np.array([0.7, -0.3]))
ecc = pm.Deterministic("ecc", tt.sum(ecs**2))
omega = pm.Deterministic("omega", tt.arctan2(ecs[1], ecs[0]))
phase = pmx.UnitUniform("phase")
tp = pm.Deterministic("tp", 0.5 * (t.min() + t.max()) + phase * P)
orbit = xo.orbits.KeplerianOrbit(
period=P, t_periastron=tp, ecc=ecc, omega=omega
)
# Noise model parameters
log_sigma_gp = pm.Normal("log_sigma_gp", mu=np.log(10), sigma=50)
log_rho_gp = pm.Normal("log_rho_gp", mu=np.log(50), sigma=50)
# Per instrument parameters
means = pm.Normal(
"means",
mu=np.array([np.median(rv[inst_id == i]) for i in range(num_inst)]),
sigma=200,
shape=num_inst,
)
sigmas = pm.HalfNormal("sigmas", sigma=10, shape=num_inst)
# Compute the RV offset and jitter for each data point depending on its instrument
mean = tt.zeros(len(t))
diag = tt.zeros(len(t))
for i in range(len(inst_names)):
mean += means[i] * (inst_id == i)
diag += (rv_err**2 + sigmas[i] ** 2) * (inst_id == i)
pm.Deterministic("mean", mean)
pm.Deterministic("diag", diag)
resid = rv - mean
def rv_model(x):
return orbit.get_radial_velocity(x, K=K)
kernel = terms.SHOTerm(
sigma=tt.exp(log_sigma_gp), rho=tt.exp(log_rho_gp), Q=1.0 / 3
)
gp = GaussianProcess(kernel, t=t, diag=diag, mean=rv_model)
gp.marginal("obs", observed=resid)
pm.Deterministic("gp_pred", gp.predict(resid, include_mean=False))
pm.Deterministic("rv_phase", rv_model(P * t_phase + tp))
map_soln = model.test_point
map_soln = pmx.optimize(map_soln, [means])
map_soln = pmx.optimize(map_soln, [means, phase])
map_soln = pmx.optimize(map_soln, [means, phase, log_K])
map_soln = pmx.optimize(map_soln, [means, tp, K, log_P, ecs])
map_soln = pmx.optimize(map_soln, [sigmas, log_sigma_gp, log_rho_gp])
map_soln = pmx.optimize(map_soln)
###Output
_____no_output_____
###Markdown
After fitting for the parameters that maximize the posterior probability, we can plot this model to make sure that things are looking reasonable:
###Code
t_pred = np.linspace(t.min() - 400, t.max() + 400, 5000)
with model:
plt.plot(
t_pred, pmx.eval_in_model(rv_model(t_pred), map_soln), "k", lw=0.5
)
detrended = rv - map_soln["mean"] - map_soln["gp_pred"]
plt.errorbar(t, detrended, yerr=rv_err, fmt=",k")
plt.scatter(
t, detrended, c=inst_id, s=8, zorder=100, cmap="tab10", vmin=0, vmax=10
)
plt.xlim(t_pred.min(), t_pred.max())
plt.xlabel("BJD")
plt.ylabel("radial velocity [m/s]")
_ = plt.title("map model", fontsize=14)
###Output
_____no_output_____
###Markdown
That looks fine, so now we can run the MCMC sampler:
###Code
with model:
trace = pmx.sample(
tune=1000,
draws=1000,
start=map_soln,
chains=2,
cores=2,
return_inferencedata=True,
random_seed=[39091, 39095],
)
###Output
_____no_output_____
###Markdown
Then we can look at some summaries of the trace and the constraints on some of the key parameters:
###Code
import corner
import arviz as az
corner.corner(trace, var_names=["P", "K", "tp", "ecc", "omega"])
az.summary(
trace, var_names=["P", "K", "tp", "ecc", "omega", "means", "sigmas"]
)
###Output
_____no_output_____
###Markdown
And finally we can plot the phased RV curve and overplot our posterior inference:
###Code
flat_samps = trace.posterior.stack(sample=("chain", "draw"))
mu = np.mean(flat_samps["mean"].values + flat_samps["gp_pred"].values, axis=-1)
mu_var = np.var(flat_samps["mean"], axis=-1)
jitter_var = np.median(flat_samps["diag"], axis=-1)
period = np.median(flat_samps["P"])
tp = np.median(flat_samps["tp"])
detrended = rv - mu
folded = ((t - tp + 0.5 * period) % period) / period
plt.errorbar(folded, detrended, yerr=np.sqrt(mu_var + jitter_var), fmt=",k")
plt.scatter(
folded,
detrended,
c=inst_id,
s=8,
zorder=100,
cmap="tab10",
vmin=0,
vmax=10,
)
plt.errorbar(
folded + 1, detrended, yerr=np.sqrt(mu_var + jitter_var), fmt=",k"
)
plt.scatter(
folded + 1,
detrended,
c=inst_id,
s=8,
zorder=100,
cmap="tab10",
vmin=0,
vmax=10,
)
x = t_phase + 0.5
y = np.mean(flat_samps["rv_phase"], axis=-1)
plt.plot(x, y, "k", lw=0.5, alpha=0.5)
plt.plot(x + 1, y, "k", lw=0.5, alpha=0.5)
plt.axvline(1, color="k", lw=0.5)
plt.xlim(0, 2)
plt.xlabel("phase")
plt.ylabel("radial velocity [m/s]")
_ = plt.title("posterior inference", fontsize=14)
###Output
_____no_output_____
###Markdown
CitationsAs described in the [citation tutorial](https://docs.exoplanet.codes/en/stable/tutorials/citation/), we can use [citations.get_citations_for_model](https://docs.exoplanet.codes/en/stable/user/api/exoplanet.citations.get_citations_for_model) to construct an acknowledgement and BibTeX listing that includes the relevant citations for this model.
###Code
with model:
txt, bib = xo.citations.get_citations_for_model()
print(txt)
print(bib.split("\n\n")[0] + "\n\n...")
###Output
_____no_output_____
###Markdown
(rv-multi)= RVs with multiple instruments
###Code
import exoplanet
exoplanet.utils.docs_setup()
print(f"exoplanet.__version__ = '{exoplanet.__version__}'")
###Output
_____no_output_____
###Markdown
In this case study, we will look at how we can use exoplanet and PyMC3 to combine datasets from different RV instruments to fit the orbit of an exoplanet system.Before getting started, I want to emphasize that the exoplanet code doesn't have strong opinions about how your data are collected, it only provides extensions that allow PyMC3 to evaluate some astronomy-specific functions.This means that you can build any kind of observation model that PyMC3 supports, and support for multiple instruments isn't really a *feature* of exoplanet, even though it is easy to implement.For the example, we'll use public observations of Pi Mensae which hosts two planets, but we'll ignore the inner planet because the significance of the RV signal is small enough that it won't affect our results.The datasets that we'll use are from the Anglo-Australian Planet Search (AAT) and the HARPS archive.As is commonly done, we will treat the HARPS observations as two independent datasets split in June 2015 when the HARPS hardware was upgraded.Therefore, we'll consider three datasets that we will allow to have different instrumental parameters (RV offset and jitter), but shared orbital parameters and stellar variability.In some cases you might also want to have a different astrophysical variability model for each instrument (if, for example, the observations are made in very different bands), but we'll keep things simple for this example.The AAT data are available from [The Exoplanet Archive](https://exoplanetarchive.ipac.caltech.edu/) and the HARPS observations can be downloaded from the [ESO Archive](http://archive.eso.org/wdb/wdb/adp/phase3_spectral/form).For the sake of simplicity, we have extracted the HARPS RVs from the archive in advance using [Megan Bedell's harps_tools library](https://github.com/megbedell/harps_tools).To start, download the data and plot them with a (very!) rough zero point correction.
###Code
import numpy as np
import pandas as pd
from astropy.io import ascii
import matplotlib.pyplot as plt
aat = ascii.read(
"https://exoplanetarchive.ipac.caltech.edu/data/ExoData/0026/0026394/data/UID_0026394_RVC_001.tbl"
)
harps = pd.read_csv(
"https://raw.githubusercontent.com/exoplanet-dev/case-studies/main/data/pi_men_harps_rvs.csv",
skiprows=1,
)
harps = harps.rename(lambda x: x.strip().strip("#"), axis=1)
harps_post = np.array(harps.date > "2015-07-01", dtype=int)
t = np.concatenate((aat["JD"], harps["bjd"]))
rv = np.concatenate((aat["Radial_Velocity"], harps["rv"]))
rv_err = np.concatenate((aat["Radial_Velocity_Uncertainty"], harps["e_rv"]))
inst_id = np.concatenate((np.zeros(len(aat), dtype=int), harps_post + 1))
inds = np.argsort(t)
t = np.ascontiguousarray(t[inds], dtype=float)
rv = np.ascontiguousarray(rv[inds], dtype=float)
rv_err = np.ascontiguousarray(rv_err[inds], dtype=float)
inst_id = np.ascontiguousarray(inst_id[inds], dtype=int)
inst_names = ["aat", "harps_pre", "harps_post"]
num_inst = len(inst_names)
for i, name in enumerate(inst_names):
m = inst_id == i
plt.errorbar(
t[m], rv[m] - np.min(rv[m]), yerr=rv_err[m], fmt=".", label=name
)
plt.legend(fontsize=10)
plt.xlabel("BJD")
_ = plt.ylabel("radial velocity [m/s]")
###Output
_____no_output_____
###Markdown
Then set up the probabilistic model.Most of this is similar to the model in the [Radial velocity fitting](https://docs.exoplanet.codes/en/stable/tutorials/rv/) tutorial, but there are a few changes to highlight:1. Instead of a polynomial model for trends, stellar variability, and inner planets, we're using a Gaussian process here. This won't have a big effect here, but more careful consideration should be performed when studying lower signal-to-noise systems.2. There are three radial velocity offsets and three jitter parameters (one for each instrument) that will be treated independently. This is the key addition made by this case study.
###Code
import pymc3 as pm
import exoplanet as xo
import aesara_theano_fallback.tensor as tt
import pymc3_ext as pmx
from celerite2.theano import terms, GaussianProcess
t_phase = np.linspace(-0.5, 0.5, 5000)
with pm.Model() as model:
# Parameters describing the orbit
log_K = pm.Normal("log_K", mu=np.log(300), sigma=10)
log_P = pm.Normal("log_P", mu=np.log(2093.07), sigma=10)
K = pm.Deterministic("K", tt.exp(log_K))
P = pm.Deterministic("P", tt.exp(log_P))
ecs = pmx.UnitDisk("ecs", testval=np.array([0.7, -0.3]))
ecc = pm.Deterministic("ecc", tt.sum(ecs ** 2))
omega = pm.Deterministic("omega", tt.arctan2(ecs[1], ecs[0]))
phase = pmx.UnitUniform("phase")
tp = pm.Deterministic("tp", 0.5 * (t.min() + t.max()) + phase * P)
orbit = xo.orbits.KeplerianOrbit(
period=P, t_periastron=tp, ecc=ecc, omega=omega
)
# Noise model parameters
log_sigma_gp = pm.Normal("log_sigma_gp", mu=np.log(10), sigma=50)
log_rho_gp = pm.Normal("log_rho_gp", mu=np.log(50), sigma=50)
# Per instrument parameters
means = pm.Normal(
"means",
mu=np.array([np.median(rv[inst_id == i]) for i in range(num_inst)]),
sigma=200,
shape=num_inst,
)
sigmas = pm.HalfNormal("sigmas", sigma=10, shape=num_inst)
# Compute the RV offset and jitter for each data point depending on its instrument
mean = tt.zeros(len(t))
diag = tt.zeros(len(t))
for i in range(len(inst_names)):
mean += means[i] * (inst_id == i)
diag += (rv_err ** 2 + sigmas[i] ** 2) * (inst_id == i)
pm.Deterministic("mean", mean)
pm.Deterministic("diag", diag)
resid = rv - mean
def rv_model(x):
return orbit.get_radial_velocity(x, K=K)
kernel = terms.SHOTerm(
sigma=tt.exp(log_sigma_gp), rho=tt.exp(log_rho_gp), Q=1.0 / 3
)
gp = GaussianProcess(kernel, t=t, diag=diag, mean=rv_model)
gp.marginal("obs", observed=resid)
pm.Deterministic("gp_pred", gp.predict(resid, include_mean=False))
pm.Deterministic("rv_phase", rv_model(P * t_phase + tp))
map_soln = model.test_point
map_soln = pmx.optimize(map_soln, [means])
map_soln = pmx.optimize(map_soln, [means, phase])
map_soln = pmx.optimize(map_soln, [means, phase, log_K])
map_soln = pmx.optimize(map_soln, [means, tp, K, log_P, ecs])
map_soln = pmx.optimize(map_soln, [sigmas, log_sigma_gp, log_rho_gp])
map_soln = pmx.optimize(map_soln)
###Output
_____no_output_____
###Markdown
After fitting for the parameters that maximize the posterior probability, we can plot this model to make sure that things are looking reasonable:
###Code
t_pred = np.linspace(t.min() - 400, t.max() + 400, 5000)
with model:
plt.plot(
t_pred, pmx.eval_in_model(rv_model(t_pred), map_soln), "k", lw=0.5
)
detrended = rv - map_soln["mean"] - map_soln["gp_pred"]
plt.errorbar(t, detrended, yerr=rv_err, fmt=",k")
plt.scatter(
t, detrended, c=inst_id, s=8, zorder=100, cmap="tab10", vmin=0, vmax=10
)
plt.xlim(t_pred.min(), t_pred.max())
plt.xlabel("BJD")
plt.ylabel("radial velocity [m/s]")
_ = plt.title("map model", fontsize=14)
###Output
_____no_output_____
###Markdown
That looks fine, so now we can run the MCMC sampler:
###Code
with model:
trace = pmx.sample(
tune=1000,
draws=1000,
start=map_soln,
chains=2,
cores=2,
return_inferencedata=True,
random_seed=[39091, 39095],
)
###Output
_____no_output_____
###Markdown
Then we can look at some summaries of the trace and the constraints on some of the key parameters:
###Code
import corner
import arviz as az
corner.corner(trace, var_names=["P", "K", "tp", "ecc", "omega"])
az.summary(
trace, var_names=["P", "K", "tp", "ecc", "omega", "means", "sigmas"]
)
###Output
_____no_output_____
###Markdown
And finally we can plot the phased RV curve and overplot our posterior inference:
###Code
flat_samps = trace.posterior.stack(sample=("chain", "draw"))
mu = np.mean(flat_samps["mean"].values + flat_samps["gp_pred"].values, axis=-1)
mu_var = np.var(flat_samps["mean"], axis=-1)
jitter_var = np.median(flat_samps["diag"], axis=-1)
period = np.median(flat_samps["P"])
tp = np.median(flat_samps["tp"])
detrended = rv - mu
folded = ((t - tp + 0.5 * period) % period) / period
plt.errorbar(folded, detrended, yerr=np.sqrt(mu_var + jitter_var), fmt=",k")
plt.scatter(
folded,
detrended,
c=inst_id,
s=8,
zorder=100,
cmap="tab10",
vmin=0,
vmax=10,
)
plt.errorbar(
folded + 1, detrended, yerr=np.sqrt(mu_var + jitter_var), fmt=",k"
)
plt.scatter(
folded + 1,
detrended,
c=inst_id,
s=8,
zorder=100,
cmap="tab10",
vmin=0,
vmax=10,
)
x = t_phase + 0.5
y = np.mean(flat_samps["rv_phase"], axis=-1)
plt.plot(x, y, "k", lw=0.5, alpha=0.5)
plt.plot(x + 1, y, "k", lw=0.5, alpha=0.5)
plt.axvline(1, color="k", lw=0.5)
plt.xlim(0, 2)
plt.xlabel("phase")
plt.ylabel("radial velocity [m/s]")
_ = plt.title("posterior inference", fontsize=14)
###Output
_____no_output_____
###Markdown
CitationsAs described in the [citation tutorial](https://docs.exoplanet.codes/en/stable/tutorials/citation/), we can use [citations.get_citations_for_model](https://docs.exoplanet.codes/en/stable/user/api/exoplanet.citations.get_citations_for_model) to construct an acknowledgement and BibTeX listing that includes the relevant citations for this model.
###Code
with model:
txt, bib = xo.citations.get_citations_for_model()
print(txt)
print(bib.split("\n\n")[0] + "\n\n...")
###Output
_____no_output_____ |
assignments/0315-CUDA_Alternatives_in-class-assignment.ipynb | ###Markdown
[Link to this document's Jupyter Notebook](./0315-CUDA_Alternatives_in-class-assignment.ipynb) In order to successfully complete this assignment you need to participate both individually and in groups during class. If you attend class in-person then have one of the instructors check your notebook and sign you out before leaving class on **Monday March 15**. If you are attending asynchronously, turn in your assignment using D2L no later than **11:59pm on Monday March 15**. --- In-Class Assignment: Alternatives Agenda for today's class (70 minutes)1. (20 minutes) [Pre class Review](Pre-class-Review)2. (5 minutes) [Submitting CUDA Jobs on the HPCC](Submitting-CUDA-Jobs-on-the-HPCC)3. (20 minutes) [Homework Questions](Homework-Questions)4. (25 minutes) [Introducing MPI](Introducing-MPI) --- 1. Pre class Review[0314--CUDA_Alternatives_pre-class-assignment](0314--CUDA_Alternatives_pre-class-assignment.ipynb)As a class we will discuss the various alternatives to CUDA and their pros and cons. --- 2. Submitting CUDA Jobs on the HPCC
###Code
%%writefile cuda_submit.sb
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH -c 1
#SBATCH -N 1
#SBATCH --gres=gpu:1
#SBATCH --mem=4gb
time srun ./mycudaprogram
#Prints out job statistics
js ${SLURM_JOB_ID}
!sbatch cuda_submit.sb
###Output
_____no_output_____
###Markdown
--- 3. Homework QuestionsHomework is due Thursday of this week. What final questions do you have? - [0318-HW3_CUDA](0318-HW3_CUDA.ipynb) --- 4. Introducing MPIOur next big topic in class will be doing "Shared Network Parallelization" using MPI (Message Passing Interface). MPI and its libraries are loaded by default on the HPCC. ✅ **DO THIS:** Get either the Pandemic or Galaxsee example working using MPI on the HPCC. Here are the basic steps:1. Compile the code without X11 options (there are no monitors on the HPC side). 2. Write a submission script (similar to the one below). 3. Submit the job and debug any errors.
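Before writing the submission script, the example has to be compiled with the MPI wrapper compiler. A minimal sketch is below; the source file and executable names are placeholders (use the source files or Makefile that ship with the Pandemic/Galaxsee example).
###Code
# Hypothetical sketch only: "galaxsee.c" and "mympiprogram" are placeholder names;
# substitute the example's real source files or its Makefile target.
!mpicc -O2 -o mympiprogram galaxsee.c -lm
###Output
_____no_output_____
###Markdown
The cell below writes a submission script for the compiled program; submit it with `!sbatch mpi_submit.sb` once it has been written.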
###Code
%%writefile mpi_submit.sb
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH -c 1
#SBATCH -N 10
#SBATCH --mem=40gb
time srun ./mympiprogram
#Prints out job statistics
js ${SLURM_JOB_ID}
###Output
_____no_output_____ |
6-Chapter-6/test_your_knowledge/test_your_knowledge_excel_solution.ipynb | ###Markdown
We will compare the 1-day forcast with historical values.
###Code
forecast_df_shifted_1 = forecast_df.copy()
forecast_df_shifted_1.index = forecast_df.index + pd.Timedelta(days=1)
forecast_df_shifted_1.head()
combined_df = forecast_df_shifted_1.merge(daily_mean_historical, left_index=True, right_index=True)
combined_df.head()
combined_df[['load_d1', 'Actual Load (MWh)']].plot()
###Output
_____no_output_____ |
metrics/cross-entropy.ipynb | ###Markdown
Cross Entropy
###Code
import numpy as np
# Write a function that takes as input two lists Y, P,
# and returns the float corresponding to their cross-entropy.
def cross_entropy(Y, P):
Y = np.float_(Y)
P = np.float_(P)
return -np.sum(Y * np.log(P) + (1 - Y) * np.log(1 - P))
Y=[1,0,1,1]
P=[0.4,0.6,0.1,0.5]
cross_entropy(Y,P)
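# Worked check (added note): with Y=[1,0,1,1] and P=[0.4,0.6,0.1,0.5] the sum is
# -[ln(0.4) + ln(1-0.6) + ln(0.1) + ln(0.5)] = 0.916 + 0.916 + 2.303 + 0.693, i.e. about 4.83,
# which is the value the expression above returns.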
###Output
_____no_output_____ |
src/classification/sexism_data_preprocessing.ipynb | ###Markdown
Loading Data
###Code
data = pandas.read_csv('./../sexism-data.csv')
new_data=data[data['scores']==1]
new_data
temp_data=data[data['scores']==0]
temp_data=temp_data[0:500]
data=new_data.append(temp_data)
data
train_data, test_data = train_test_split(data)
words = Counter()
word2idx = {}
idx2word = {}
def tokenizeText(sentence):
tokens = word_tokenize(sentence)
return tokens
def sent2idx(split_text):
sent2idx = []
for w in split_text:
if w.lower() in word2idx:
sent2idx.append(word2idx[w.lower()])
else:
sent2idx.append(word2idx['_UNK'])
return sent2idx
def processTextData(df,isTrain):
global words
global word2idx
global idx2word
df = df.copy()
df['tokenized'] = df.texts.apply(lambda x: (tokenizeText(x.lower())))
if isTrain:
for sent in tqdm(df.tokenized.values):
words.update(w for w in sent)
words = sorted(words, key=words.get, reverse=True)
words = ['_PAD','_UNK'] + words
word2idx = {o:i for i,o in enumerate(words)}
idx2word = {i:o for i,o in enumerate(words)}
df['vectorized'] = df.texts.apply(lambda x: sent2idx(x))
return df
train_data = processTextData(train_data,True)
test_data = processTextData(test_data,False)
def label(score):
l=[0,0]
l[score]=1
return l
train_data['label']=train_data['scores'].apply(label)
test_data['label']=test_data['scores'].apply(label)
train_data
class VectorizeData(Dataset):
def __init__(self, df, maxlen=10):
self.maxlen = maxlen
self.df = df
self.df['text_padded'] = self.df.vectorized.apply(lambda x: self.pad_data(x))
def __len__(self):
return self.df.shape[0]
def __getitem__(self, idx):
text = self.df.text_padded.values[idx]
sexism_label = self.df.label.values[idx]
sexism_type = self.df['class'].values[idx]
return text,sexism_label,sexism_type
def pad_data(self, s):
padded = np.zeros((self.maxlen,), dtype=np.int64)
if len(s) > self.maxlen: padded[:] = s[:self.maxlen]
else: padded[:len(s)] = s
return padded
trainDataset = VectorizeData(train_data)
testDataset = VectorizeData(test_data)
trainLoader = DataLoader(dataset=trainDataset, batch_size=100, shuffle=True)
testLoader = DataLoader(dataset=testDataset, batch_size=100, shuffle=False)
for i, samples in enumerate(trainLoader):
print(i)
print(samples[1])
break
###Output
0
[tensor([1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1,
1, 1, 1, 1]), tensor([0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0])]
###Markdown
Sentence to model input
###Code
def pad_data(s,maxlen):
padded = np.zeros((maxlen,), dtype=np.int64)
if len(s) > maxlen: padded[:] = s[:maxlen]
else: padded[:len(s)] = s
return padded
def sentToTensor(text,word2idx,vectors):
padded_vector = pad_data(sent2idx(tokenizeText(text)),10)
return torch.tensor(padded_vector).reshape(1,-1)
###Output
_____no_output_____
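###Markdown
A quick usage sketch (added for illustration): the sample sentence and the `None` placeholder are hypothetical, and the `vectors` argument is unused by `sentToTensor` as written.
###Code
# Hypothetical usage of sentToTensor defined above; the third argument is ignored
# by the function, so None is passed as a placeholder.
example_tensor = sentToTensor("this is a sample sentence", word2idx, None)
print(example_tensor.shape) # expected: torch.Size([1, 10])
###Output
_____no_output_____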
###Markdown
Extrapolating to MultiClass Problem
###Code
class VectorizeDataMultiClass(Dataset):
def __init__(self, df, maxlen=10):
self.maxlen = maxlen
self.df = df
self.df['text_padded'] = self.df.vectorized.apply(lambda x: self.pad_data(x))
def __len__(self):
return self.df.shape[0]
def __getitem__(self, idx):
text = self.df.text_padded.values[idx]
sexism_label = self.df.scores.values[idx]
sexism_type = self.df['class'].values[idx]
if sexism_label == 0 and sexism_type == 0:
return text,0
if sexism_label == 1 and sexism_type == 1:
return text,1
if sexism_label == 1 and sexism_type == 2:
return text,2
def pad_data(self, s):
padded = np.zeros((self.maxlen,), dtype=np.int64)
if len(s) > self.maxlen: padded[:] = s[:self.maxlen]
else: padded[:len(s)] = s
return padded
trainDatasetMC = VectorizeDataMultiClass(train_data)
testDatasetMC = VectorizeDataMultiClass(test_data)
trainLoaderMC = DataLoader(dataset=trainDatasetMC, batch_size=100, shuffle=True)
testLoaderMC = DataLoader(dataset=testDatasetMC, batch_size=100, shuffle=False)
print('Multiclass data')
for i, samples in enumerate(trainLoader):
print(i)
print(samples[0])
print(samples[1])
print(samples[2])
break
###Output
0
tensor([[ 6370, 2674, 1070, 1070, 1941, 373, 2, 1, 7, 373],
[ 8, 2616, 1, 1118, 1070, 445, 1115, 690, 1, 1070],
[16686, 445, 373, 1090, 1, 1469, 1115, 1070, 1941, 1941],
[ 2674, 7, 3683, 8, 812, 4253, 1, 1090, 2674, 690],
[ 812, 1070, 1, 1090, 1115, 445, 1415, 3176, 690, 1115],
[ 3176, 7, 2674, 7, 7, 812, 1, 2674, 445, 966],
[ 8, 1, 6370, 7, 373, 1, 2616, 1070, 1115, 1],
[ 8, 1090, 373, 1, 1941, 1070, 373, 373, 8, 1958],
[ 1415, 2674, 690, 1415, 3176, 1, 1070, 445, 1090, 1],
[ 1070, 812, 1, 1090, 2674, 8, 373, 1, 6370, 690],
[ 926, 690, 1090, 171, 373, 1, 2616, 8, 812, 1469],
[ 2674, 690, 7, 1115, 1090, 1958, 1115, 690, 7, 3176],
[ 8, 1, 926, 8, 3683, 690, 1, 1958, 1118, 1],
[ 2674, 690, 1, 1090, 2674, 690, 1, 1115, 690, 7],
[ 1090, 2674, 690, 1, 1958, 1070, 1469, 1118, 1, 3176],
[ 1118, 445, 966, 3, 1, 1090, 2674, 8, 373, 1],
[ 373, 812, 7, 1941, 373, 2674, 1070, 1090, 1, 2616],
[ 6370, 690, 1, 6370, 1070, 445, 926, 1469, 1, 926],
[ 2616, 1070, 1115, 1, 1090, 2674, 8, 373, 1, 3],
[16686, 445, 1469, 4253, 8, 812, 4253, 1, 7, 373],
[ 926, 7, 373, 1090, 1, 812, 8, 4253, 2674, 1090],
[ 1090, 2674, 8, 373, 1, 8, 373, 1, 7, 926],
[ 1415, 1070, 966, 690, 1, 1415, 2674, 690, 1415, 3176],
[ 4253, 1070, 1090, 1, 1090, 2674, 690, 966, 1, 1415],
[ 1070, 1415, 1090, 1070, 1958, 690, 1115, 1, 6370, 690],
[ 2616, 690, 966, 8, 812, 8, 373, 1090, 1, 1958],
[ 1070, 812, 690, 1, 1070, 2616, 1, 1090, 2674, 690],
[ 1941, 1070, 373, 1090, 690, 1469, 1, 1958, 1118, 1],
[ 6370, 690, 1, 2674, 7, 3683, 690, 1, 1090, 2674],
[ 1941, 926, 690, 7, 373, 690, 1, 373, 7, 1118],
[ 1090, 2674, 8, 373, 1, 7, 8, 812, 1090, 1],
[ 1090, 6370, 1070, 1, 1415, 1070, 445, 1941, 926, 690],
[ 1070, 445, 1090, 1, 6370, 8, 1090, 2674, 1, 1090],
[ 812, 8, 4253, 2674, 1090, 1, 3683, 8, 1958, 690],
[ 7, 812, 1070, 1090, 2674, 690, 1115, 1, 4253, 1070],
[ 1469, 1070, 1, 8, 1090, 1, 2616, 1070, 1115, 1],
[ 1118, 1070, 445, 1090, 445, 1958, 690, 1115, 3, 1],
[ 1090, 2674, 690, 1115, 690, 1, 7, 1115, 690, 1],
[ 8, 1, 7, 8, 812, 1090, 1, 373, 690, 947],
[ 2674, 7, 1941, 1941, 1118, 1, 1958, 8, 1115, 1090],
[ 1090, 2674, 8, 373, 1, 4253, 445, 1118, 1, 1090],
[ 8, 1090, 1, 1469, 1070, 690, 373, 812, 1090, 1],
[ 1090, 1070, 812, 8, 4253, 2674, 1090, 3, 1, 926],
[ 1115, 7, 1941, 690, 3, 1, 7, 1958, 445, 373],
[ 8, 812, 1, 1090, 2674, 1115, 690, 690, 1, 6370],
[ 1, 1090, 1070, 1469, 7, 1118, 1, 966, 7, 1115],
[ 6370, 2674, 690, 812, 1, 1118, 1070, 445, 1, 4253],
[ 6370, 1070, 966, 690, 812, 1, 7, 1115, 690, 1],
[ 373, 1070, 1, 1958, 690, 4253, 8, 812, 1, 1090],
[ 6370, 2674, 7, 1090, 1, 6370, 690, 1, 373, 7],
[ 926, 8, 3176, 690, 1, 6370, 2674, 7, 1090, 1],
[ 373, 445, 812, 1469, 7, 1118, 1, 966, 1070, 1115],
[ 1941, 445, 1090, 1, 1118, 1070, 445, 1115, 1, 2616],
[ 373, 8, 812, 1415, 690, 1, 445, 1941, 926, 1070],
[ 1958, 690, 373, 1090, 1, 966, 690, 966, 690, 1],
[ 373, 6370, 8, 1941, 690, 1, 1090, 1070, 1, 373],
[ 8, 1, 4253, 1070, 1090, 1090, 7, 1, 1958, 690],
[ 8, 1, 6370, 1070, 445, 926, 1469, 1, 373, 7],
[ 1070, 966, 4253, 2, 1, 8, 1090, 171, 373, 1],
[16686, 7, 1941, 1, 966, 690, 1090, 1115, 1070, 1941],
[ 1090, 2674, 8, 373, 1, 1415, 1115, 7, 1415, 3176],
[ 1958, 690, 1415, 1070, 966, 690, 1, 1070, 812, 690],
[ 7, 1, 926, 8, 1090, 1090, 926, 690, 1, 966],
[ 1090, 2674, 690, 1115, 690, 171, 373, 1, 812, 1070],
[ 4253, 1115, 7, 812, 1, 2616, 8, 690, 373, 1090],
[ 6370, 1070, 6370, 1, 2674, 1070, 6370, 1, 373, 690],
[ 2674, 7, 1941, 1941, 1118, 1, 1090, 2674, 445, 1115],
[ 3683, 690, 1115, 1118, 1, 6370, 690, 926, 926, 1],
[ 1090, 2674, 690, 1, 6370, 7, 1118, 1, 8, 373],
[ 8, 1941, 2674, 1070, 812, 690, 1, 8, 373, 1],
[ 7, 1, 926, 1070, 3683, 690, 926, 1118, 1, 6370],
[ 2616, 1070, 1415, 445, 373, 1, 1070, 812, 1, 6370],
[ 373, 1090, 1115, 690, 812, 4253, 1090, 2674, 1, 8],
[ 1090, 7, 4253, 1, 7, 1, 2616, 1115, 8, 690],
[ 1090, 2674, 8, 373, 1, 6370, 8, 926, 926, 1],
[ 966, 8, 373, 373, 1, 966, 1118, 1, 1958, 7],
[ 1415, 2674, 690, 690, 373, 690, 1, 2616, 1070, 1115],
[ 6370, 2674, 7, 1090, 1, 17, 1, 17, 1, 17],
[ 2616, 690, 966, 8, 812, 8, 373, 966, 1, 8],
[ 7, 373, 1, 926, 1070, 812, 4253, 1, 7, 373],
[ 7, 812, 1469, 1, 2674, 690, 1115, 690, 1, 2674],
[ 1070, 445, 1115, 1, 1070, 812, 926, 1118, 1, 926],
[ 1090, 2674, 690, 1, 966, 1070, 3683, 690, 966, 690],
[ 1090, 2674, 690, 1, 373, 1070, 7, 373, 1, 1415],
[ 2674, 690, 1118, 1, 690, 3683, 690, 1115, 1118, 1070],
[ 6370, 690, 1958, 373, 1, 1415, 7, 812, 1, 4253],
[ 3176, 7, 2674, 8, 1, 1090, 1070, 2674, 1, 2674],
[ 2616, 1070, 1115, 690, 3683, 690, 1115, 1, 7, 812],
[ 8, 1, 6370, 1070, 445, 926, 1469, 1, 926, 8],
[ 926, 690, 1090, 171, 373, 1, 1958, 690, 1, 1415],
[ 2674, 690, 1115, 690, 1, 8, 373, 1, 7, 1],
[ 6370, 2674, 1118, 1, 2616, 690, 966, 8, 812, 8],
[ 6370, 7, 8, 373, 1090, 1, 1415, 926, 445, 1090],
[ 1469, 690, 7, 1115, 1, 3683, 8, 690, 6370, 690],
[ 1070, 445, 1090, 373, 1090, 7, 812, 1469, 8, 812],
[ 2674, 7, 1958, 8, 1090, 7, 1090, 1, 1941, 1115],
[ 2616, 1070, 1115, 1090, 445, 812, 7, 1090, 690, 926],
[ 1090, 2674, 8, 373, 1, 8, 373, 1, 373, 1070],
[ 8, 1, 373, 6370, 690, 7, 1115, 1, 8, 1],
[ 6370, 690, 1, 7, 1415, 1090, 445, 7, 926, 926]])
[tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1]), tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0])]
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 2, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0])
|
BERT_Custom/Convert_txt.ipynb | ###Markdown
counting lines
###Code
import re
file_1 = open('train_alltags_2.txt')
file_2 = open('test_alltags_2.txt')
sentence_regex = re.compile("^(Sentence: )\d+")
sent_no = 0
sent_count_train = 0
sent_count_test = 0
tmp =0
for line in file_1.readlines():
if len(line) <=3:
continue
sentence_search = sentence_regex.search(line)
sent_no = int(sentence_search.group(0)[9:])
if sent_no != tmp:
sent_count_train += 1
tmp = sent_no
tmp = 0  # reset so the test-file count does not depend on the last train sentence number
for line in file_2.readlines():
if len(line) <=3:
continue
sentence_search = sentence_regex.search(line)
sent_no = int(sentence_search.group(0)[9:])
if sent_no != tmp:
sent_count_test += 1
tmp = sent_no
print(sent_count_train, sent_count_test)
###Output
_____no_output_____ |
M33_fitting/Testing_Completeness_Experiement.ipynb | ###Markdown
Draw a Schechter function with a -2 power law
###Code
def Cliff_pl_draw_schechter(ndraw,alpha,llim,ulim,M_c=1.e4,rseed=None,returnfull=False):
alphapp=alpha+1.
np.random.seed(rseed)
rand=np.random.rand(ndraw)
pdf=(rand*ulim**alphapp + (1.-rand)*llim**alphapp)**(1./alphapp)
rand=np.random.rand(ndraw)
select = rand < np.exp(-pdf/M_c)
if returnfull:
return pdf[select], pdf
else:
return pdf[select]
drawn_masses=np.log10(Cliff_pl_draw_schechter(4000, -2, 300, 10**7, M_c=10**4.25 ))
#Draw ages and NMS with replacement from our distributions
drawn_ages=np.random.choice(full_ages, size=len(drawn_masses))
drawn_nms=np.random.choice(full_nmses, size=len(drawn_masses))
###Output
_____no_output_____
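###Markdown
For reference (added note, not in the original notebook): the sampler above works in two steps. It first inverts the CDF of a pure power law $p(M)\propto M^{\alpha}$ on $[M_{\rm low}, M_{\rm up}]$, which for $\alpha\neq -1$ gives $M=\left[u\,M_{\rm up}^{\alpha+1}+(1-u)\,M_{\rm low}^{\alpha+1}\right]^{1/(\alpha+1)}$ for a uniform deviate $u$; it then keeps each draw with probability $e^{-M/M_c}$ (rejection sampling), so the retained masses follow the Schechter form $M^{\alpha}e^{-M/M_c}$.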
###Markdown
Apply a completeness to determine detection1. Assess the completeness for a given cluster, given its mass, age, and NMS2. Draw a random number between 0:1 and if the number is less than the completeness value, the cluster was "Detected"
###Code
#Defining the completeness functions
def pobs(M, mlim):
k=6.3665
y=(1.+ exp(-k*(M-mlim)))**(-1)
return y
def c(NMS):
m=0.7117385589429568
b=0.6066972150830925
y= (m*NMS)+b
if NMS < 2.53:
return 2.413
if 2.53 <= NMS <= 3.49:
return y
if NMS > 3.49:
return 3.054
def M_lim(Tau, NMS):
#fit from completeness limit
a=0.06005753215407492
b=1.0190688706002926
c_=c(NMS)
Tau_min=6.71
y= a*np.exp(b*(Tau-Tau_min))+c_
return y
drawn_mlims=np.zeros((len(drawn_masses)))
for i in range(len(drawn_masses)):
drawn_mlims[i]=M_lim(drawn_ages[i], drawn_nms[i])
#determin "detected" clusters
detected_masses=[]
detected_mlims=[]
for i in range(len(drawn_masses)):
rand= np.random.rand()
if rand < pobs(drawn_masses[i], drawn_mlims[i]):
detected_masses.append(drawn_masses[i])
detected_mlims.append(drawn_mlims[i])
detected_masses=np.array(detected_masses)
detected_mlims=np.array(detected_mlims)
# only feed in detected clusters, and only take above the 50% completeness and re-run
using_dm=detected_masses[np.where(detected_masses > detected_mlims)]
using_mlims=10**detected_mlims[np.where(detected_masses > detected_mlims)]
#Defining necessary functions
def lnobs_like(M, mlim):
k=6.3665
return -np.log(1.+ exp(-k*(M-mlim)))
def Shecter_Z(M, mlim, alpha, M_c):
x = M/M_c
k=6.3665
pobs= 1./(1.+ exp((-k)*(np.log10(M)-mlim)))
return (x**alpha) * exp(-x) * pobs
def lnlike(theta, M, mlim):
alpha, M_c = theta
lin_M_c= 10.**M_c
lin_M= 10**M
x= lin_M/lin_M_c
ln_pobs=lnobs_like(M, np.log10(mlim))
norm= np.zeros(len(M))
err=np.zeros(len(M))
for i in range(len(M)):
norm[i], err[i] = quad(Shecter_Z, mlim[i], 1.e7, args=(np.log10(mlim[i]), alpha, lin_M_c))
lnlike = np.sum((-x) + alpha*np.log(x) + ln_pobs - np.log(norm))
return lnlike
def lnprior(theta):
alpha, M_c = theta
if -3 <= alpha <= -1 and 3 <= M_c <= 8:
return 0.0
return -np.inf
def lnprob(theta, M, mlim):
lp = lnprior(theta)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike(theta, M, mlim)
#Running the Maximum Likelihood Fit
nll = lambda *args: -lnprob(*args)
starting_point=np.array([-2., 4.25])
fd_result=opt.minimize(nll, x0=starting_point, args=(using_dm, using_mlims))
fd_result['x']
###Output
_____no_output_____ |
Model backlog/Train XGBM/12-melanoma-5fold-xgbm-basic-fts-external-data-oof.ipynb | ###Markdown
Dependencies
###Code
import warnings, json, re, math
from melanoma_utility_scripts import *
from kaggle_datasets import KaggleDatasets
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import KFold, RandomizedSearchCV, GridSearchCV
from xgboost import XGBClassifier
SEED = 42
seed_everything(SEED)
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Model parameters
###Code
config = {
"N_FOLDS": 5,
"N_USED_FOLDS": 5,
"DATASET_PATH": 'melanoma-256x256'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
config
###Output
_____no_output_____
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/siim-isic-melanoma-classification/'
train = pd.read_csv(f"/kaggle/input/{config['DATASET_PATH']}/train.csv")
train_ext = pd.read_csv(f"/kaggle/input/isic2019-256x256/train.csv")
train_malig_1 = pd.read_csv(f"/kaggle/input/malignant-v2-256x256/train_malig_1.csv")
train_malig_3 = pd.read_csv(f"/kaggle/input/malignant-v2-256x256/train_malig_3.csv")
train['external'] = 0
train_ext['external'] = 1
train_malig_1['external'] = 0
train_malig_3['external'] = 0
train = pd.concat([train, train_ext, train_malig_1, train_malig_3])
test = pd.read_csv(database_base_path + 'test.csv')
print('Train samples: %d' % len(train))
display(train.head())
print(f'Test samples: {len(test)}')
display(test.head())
display(train.describe())
###Output
_____no_output_____
###Markdown
Missing values
###Code
# age_approx (mean)
train['age_approx'].fillna(train['age_approx'].mean(), inplace=True)
test['age_approx'].fillna(train['age_approx'].mean(), inplace=True)
# anatom_site_general_challenge (NaN)
train['anatom_site_general_challenge'].fillna('NaN', inplace=True)
test['anatom_site_general_challenge'].fillna('NaN', inplace=True)
# sex (mode)
train['sex'].fillna(train['sex'].mode()[0], inplace=True)
test['sex'].fillna(train['sex'].mode()[0], inplace=True)
###Output
_____no_output_____
###Markdown
Feature engineering
###Code
### Label encoding
enc = LabelEncoder()
train['sex_enc'] = enc.fit_transform(train['sex'].astype('str'))
test['sex_enc'] = enc.transform(test['sex'].astype('str'))
### One-hot encoding
# train = pd.concat([train, pd.get_dummies(train['sex'], prefix='sex_enc', drop_first=True)], axis=1)
# test = pd.concat([test, pd.get_dummies(test['sex'], prefix='sex_enc', drop_first=True)], axis=1)
### Mean encoding
# Sex
train['sex_mean'] = train['sex'].map(train.groupby(['sex'])['target'].mean())
test['sex_mean'] = test['sex'].map(train.groupby(['sex'])['target'].mean())
# # External features
# train_img_ft = pd.read_csv('../input/landscape/TrainSuperTab.csv')
# test_img_ft = pd.read_csv('../input/landscape/TestSuperTab.csv')
# ext_fts = ['V1', 'V2', 'V3', 'V4','V5', 'V6', 'V7', 'V8', 'V9', 'V10', 'V11', 'V12',
# 'V13', 'V14', 'V15', 'V16', 'V17', 'V18', 'V19', 'V20', 'V21', 'V22', 'V23', 'V24', 'V25',
# 'V26', 'V27', 'V28', 'V29', 'V30', 'V31', 'V32', 'V33', 'V34', 'V35', 'V36', 'V37']
# for ft in ext_fts:
# train[ft] = train_img_ft[ft]
# test[ft] = test_img_ft[ft]
print('Train set')
display(train.head())
print('Test set')
display(test.head())
###Output
Train set
###Markdown
Model
###Code
features = ['age_approx', 'sex_mean']
ohe_features = [col for col in train.columns if 'enc' in col]
features += ohe_features
# External features
# features += ext_fts
print(features)
# Hyperparameter grid
param_grid = {
'max_depth': list(range(2, 12, 2)),
'learning_rate': list(np.logspace(np.log10(0.005), np.log10(0.5), base=10, num=1000)),
'reg_alpha': list(np.linspace(0, 1)),
'reg_lambda': list(np.linspace(0, 1)),
'colsample_bytree': list(np.linspace(0.3, 1, 10)),
'subsample': list(np.linspace(0.5, 1, 100)),
'scale_pos_weight': list(np.linspace(1, (len(train[train['target'] == 0]) / len(train[train['target'] == 1])), 10)),
}
skf = KFold(n_splits=config['N_USED_FOLDS'], shuffle=True, random_state=SEED)
def get_idxs():
for fold,(idxT, idxV) in enumerate(skf.split(np.arange(15))):
x_train = train[(train['tfrecord'].isin(idxT) & (train['external'] == 0)) | # 2020 data
(train['tfrecord'].isin(idxT * 2) & (train['external'] == 1)) | # 2018 data
(train['tfrecord'].isin(idxT + 30) & (train['external'] == 0)) | # 2019 & 2018 data (malig)
(train['tfrecord'].isin(idxT + 15) & (train['external'] == 0)) # new data (malig)
]
x_valid = train[~((train['tfrecord'].isin(idxT) & (train['external'] == 0)) | # 2020 data
(train['tfrecord'].isin(idxT * 2) & (train['external'] == 1)) | # 2018 data
(train['tfrecord'].isin(idxT + 30) & (train['external'] == 0)) | # 2019 & 2018 data (malig)
(train['tfrecord'].isin(idxT + 15) & (train['external'] == 0))) # new data (malig)
]
yield x_train.index, x_valid.index
# Model
model = XGBClassifier(n_estimators=300, random_state=SEED)
grid_search = RandomizedSearchCV(param_distributions=param_grid, estimator=model, scoring='roc_auc',
cv=iter(get_idxs()), n_jobs=-1, n_iter=100, verbose=1)
result = grid_search.fit(train[features], train['target'])
print("Best: %f using %s" % (result.best_score_, result.best_params_))
means = result.cv_results_['mean_test_score']
stds = result.cv_results_['std_test_score']
params = result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
params = result.best_params_
###Output
Fitting 5 folds for each of 100 candidates, totalling 500 fits
###Markdown
Training
###Code
skf = KFold(n_splits=config['N_USED_FOLDS'], shuffle=True, random_state=SEED)
test['target'] = 0
model_list = []
for fold,(idxT, idxV) in enumerate(skf.split(np.arange(15))):
print(f'\nFOLD: {fold+1}')
print(f'TRAIN: {idxT} VALID: {idxV}')
train[f'fold_{fold+1}'] = train.apply(lambda x: 'train' if x['tfrecord'] in idxT else 'validation', axis=1)
x_train = train[train['tfrecord'].isin(idxT)]
y_train = x_train['target']
x_valid = train[~train['tfrecord'].isin(idxT)]
y_valid = x_valid['target']
model = XGBClassifier(**params, random_state=SEED)
model.fit(x_train[features], y_train, eval_set=[(x_valid[features], y_valid)], eval_metric='auc', verbose=0)
model_list.append(model)
# Evaluation
preds = model.predict_proba(train[features])[:, 1]
train[f'pred_fold_{fold+1}'] = preds
# Inference
preds = model.predict_proba(test[features])[:, 1]
test[f'pred_fold_{fold+1}'] = preds
test['target'] += preds / config['N_USED_FOLDS']
###Output
FOLD: 1
TRAIN: [ 1 2 3 4 5 6 7 8 10 12 13 14] VALID: [ 0 9 11]
FOLD: 2
TRAIN: [ 0 1 2 3 4 6 7 9 10 11 12 14] VALID: [ 5 8 13]
FOLD: 3
TRAIN: [ 0 3 4 5 6 7 8 9 10 11 12 13] VALID: [ 1 2 14]
FOLD: 4
TRAIN: [ 0 1 2 3 5 6 8 9 11 12 13 14] VALID: [ 4 7 10]
FOLD: 5
TRAIN: [ 0 1 2 4 5 7 8 9 10 11 13 14] VALID: [ 3 6 12]
###Markdown
Model evaluation
###Code
def func(x):
if x['fold_1'] == 'validation':
return x['pred_fold_1']
elif x['fold_2'] == 'validation':
return x['pred_fold_2']
elif x['fold_3'] == 'validation':
return x['pred_fold_3']
elif x['fold_4'] == 'validation':
return x['pred_fold_4']
elif x['fold_5'] == 'validation':
return x['pred_fold_5']
train['pred'] = train.apply(lambda x: func(x), axis=1)
auc_oof = roc_auc_score(train['target'], train['pred'])
print(f'Overall OOF AUC = {auc_oof:.3f}')
df_oof = train[['image_name', 'target', 'pred']]
df_oof.to_csv('oof.csv', index=False)
display(df_oof.head())
display(df_oof.describe().T)
###Output
Overall OOF AUC = 0.664
###Markdown
Feature importance
###Code
for n_fold, model in enumerate(model_list):
print(f'Fold: {n_fold + 1}')
feature_importance = model.get_booster().get_score(importance_type='weight')
keys = list(feature_importance.keys())
values = list(feature_importance.values())
importance = pd.DataFrame(data=values, index=keys,
columns=['score']).sort_values(by='score',
ascending=False)
plt.figure(figsize=(16, 8))
sns.barplot(x=importance.score.iloc[:20],
y=importance.index[:20],
orient='h',
palette='Reds_r')
plt.show()
###Output
Fold: 1
###Markdown
Model evaluation
###Code
display(evaluate_model(train, config['N_USED_FOLDS']).style.applymap(color_map))
display(evaluate_model_Subset(train, config['N_USED_FOLDS']).style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Adversarial Validation
###Code
### Adversarial set
adv_train = train.copy()
adv_test = test.copy()
adv_train['dataset'] = 1
adv_test['dataset'] = 0
x_adv = pd.concat([adv_train, adv_test], axis=0)
y_adv = x_adv['dataset']
### Adversarial model
model_adv = XGBClassifier(**params, random_state=SEED)
model_adv.fit(x_adv[features], y_adv, eval_metric='auc', verbose=0)
### Preds
preds = model_adv.predict_proba(x_adv[features])[:, 1]
### Plot feature importance and ROC AUC curve
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))
# Feature importance
feature_importance = model_adv.get_booster().get_score(importance_type='weight')
keys = list(feature_importance.keys())
values = list(feature_importance.values())
importance = pd.DataFrame(data=values, index=keys,
columns=['score']).sort_values(by='score',
ascending=False)
ax1.set_title('Feature Importances')
sns.barplot(x=importance.score.iloc[:20],
y=importance.index[:20],
orient='h',
palette='Reds_r',
ax=ax1)
# Plot ROC AUC curve
fpr_train, tpr_train, _ = roc_curve(y_adv, preds)
roc_auc_train = auc(fpr_train, tpr_train)
ax2.set_title('ROC AUC curve')
ax2.plot(fpr_train, tpr_train, color='blue', label='Adversarial AUC = %0.2f' % roc_auc_train)
ax2.legend(loc = 'lower right')
ax2.plot([0, 1], [0, 1],'r--')
ax2.set_xlim([0, 1])
ax2.set_ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
###Output
_____no_output_____
###Markdown
Visualize predictions
###Code
train['pred'] = 0
for n_fold in range(config['N_USED_FOLDS']):
train['pred'] += train[f'pred_fold_{n_fold+1}'] / config['N_FOLDS']
print('Label/prediction distribution')
print(f"Train positive labels: {len(train[train['target'] > .5])}")
print(f"Train positive predictions: {len(train[train['pred'] > .5])}")
print(f"Train positive correct predictions: {len(train[(train['target'] > .5) & (train['pred'] > .5)])}")
print('Top 10 samples')
display(train[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in train.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(train[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in train.columns if (c.startswith('pred_fold'))]].query('target == 1').head(10))
print('Top 10 predicted positive samples')
display(train[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in train.columns if (c.startswith('pred_fold'))]].query('pred > .5').head(10))
###Output
Label/prediction distribution
Train positive labels: 8502
Train positive predictions: 4347
Train positive correct predictions: 1431
Top 10 samples
###Markdown
Visualize test predictions
###Code
print(f"Test predictions {len(test[test['target'] > .5])}|{len(test[test['target'] <= .5])}")
print('Top 10 samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target'] +
[c for c in test.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target'] +
[c for c in test.columns if (c.startswith('pred_fold'))]].query('target > .5').head(10))
###Output
Test predictions 377|10605
Top 10 samples
###Markdown
Test set predictions
###Code
submission = pd.read_csv(database_base_path + 'sample_submission.csv')
submission['target'] = test['target']
fig = plt.subplots(figsize=(20, 6))
plt.hist(submission['target'], bins=100)
plt.title('Preds', size=18)
plt.show()
display(submission.head(10))
display(submission.describe())
submission[['image_name', 'target']].to_csv('submission.csv', index=False)
###Output
_____no_output_____ |
notebooks/vvl/Final_scale_e3t_forcing.ipynb | ###Markdown
Read in e3t and create +/-2 m SSH versions
###Code
import xarray as xr
import numpy as np
import time
from datetime import datetime, timedelta
from dateutil.parser import parse
import os
from netCDF4 import Dataset
###Output
_____no_output_____
###Markdown
User input
###Code
date_begin = parse('5 june 2015')
date_end = parse('12 june 2015')
path = '/results2/SalishSea/nowcast-green.201806/'
filetype = 'carp_T'
depth_change = 2
out_e3t_frac = '/home/rmueller/Projects/MIDOSS/analysis-rachael/notebooks/vvl/e3t_frac_dz_2.nc'
# pick Salmon Bank location [256,265], but remember that MOHID is transposed! such that SSC [yloc_ssc,xloc_ssc]-> [xloc_ssc,yloc_ssc]_mohid = [yloc_mohid,xloc_mohid]
yloc_mohid = 265
xloc_mohid = 256
def mung_array(SSC_gridded_array, array_slice_type):
"""Transform an array containing SalishSeaCast-gridded data and transform it
into a MOHID-gridded array by:
1) Cutting off the grid edges
2) Transposing the X and Y axes
3) Flipping the depth dimension, if it is present
4) Converting the NaNs to 0
:arg SSC_gridded_array: SalishSeaCast-gridded array
:type numpy.ndarray: :py:class:'ndarray'
:arg array_slice_type: str, one of '2D' or '3D'
:type str: :py:class:'str'
:return MOHID_gridded_array: MOHID-gridded array produced by applying operation
1-4 on SSC_gridded_array
:type numpy.ndarray: :py:class:'ndarray'
"""
shape = SSC_gridded_array.shape
ndims = len(shape)
assert(array_slice_type in ('2D', '3D')), f"Invalid option {array_slice_type}. array_slice_type must be one of ('2D', '3D')"
if array_slice_type is '2D':
assert(ndims in (2,3)), f'The shape of the array given is {shape}, while the option chosen was {array_slice_type}'
if ndims == 2:
MOHID_gridded_array = SSC_gridded_array[1:897:,1:397]
del(SSC_gridded_array)
MOHID_gridded_array = np.transpose(MOHID_gridded_array, [1,0])
else:
MOHID_gridded_array = SSC_gridded_array[...,1:897:,1:397]
del(SSC_gridded_array)
MOHID_gridded_array = np.transpose(MOHID_gridded_array, [0,2,1])
else:
assert(ndims in (3,4)), f'The shape of the array given is {shape}, while the option chosen was {array_slice_type}'
MOHID_gridded_array = SSC_gridded_array[...,1:897:,1:397]
del(SSC_gridded_array)
if ndims == 3:
MOHID_gridded_array = np.transpose(MOHID_gridded_array, [0,2,1])
MOHID_gridded_array = np.flip(MOHID_gridded_array, axis = 0)
else:
MOHID_gridded_array = np.transpose(MOHID_gridded_array, [0,1,3,2])
MOHID_gridded_array = np.flip(MOHID_gridded_array, axis = 1)
MOHID_gridded_array = np.nan_to_num(MOHID_gridded_array).astype('float64')
return MOHID_gridded_array
###Output
_____no_output_____
###Markdown
Ashu's function for writing HDF5 file
###Code
def write_grid(data, datearrays, metadata, filename, groupname, accumulator, compression_level):
shape = data[0].shape
with h5py.File(filename) as f:
time_group = f.get('/Time')
if time_group is None:
time_group = f.create_group('/Time')
data_group_path = f'/Results/{groupname}'
data_group = f.get(data_group_path)
if data_group is None:
data_group = f.create_group(data_group_path)
for i, datearray in enumerate(datearrays):
numeric_attribute = ((5 - len(str(i + accumulator))) * '0') + str(i + accumulator)
child_name = 'Time_' + numeric_attribute
timestamp = time_group.get(child_name)
if timestamp is None:
dataset = time_group.create_dataset(
child_name,
shape = (6,),
data = datearray,
chunks = (6,),
compression = 'gzip',
compression_opts = compression_level
)
time_metadata = {
'Maximum' : np.array(datearray[0]),
'Minimum' : np.array([-0.]),
'Units' : b'YYYY/MM/DD HH:MM:SS'
}
dataset.attrs.update(time_metadata)
else:
assert (np.asarray(timestamp) == datearray).all(), f'Time record {child_name} exists and does not match with {datearray}'
child_name = groupname + '_' + numeric_attribute
if data_group.get(child_name) is not None:
print(f'Dataset already exists at {child_name}')
else:
dataset = data_group.create_dataset(
child_name,
shape = shape,
data = data[i],
chunks = shape,
compression = 'gzip',
compression_opts = compression_level
)
dataset.attrs.update(metadata)
###Output
_____no_output_____
###Markdown
Generate list of dates from user input
###Code
daterange = [date_begin, date_end]
# append all filename strings within daterange to lists
e3t_list = []
for day in range(np.diff(daterange)[0].days + 1):
datestamp = daterange[0] + timedelta(days = day)
datestr1 = datestamp.strftime('%d%b%y').lower()
datestr2 = datestamp.strftime('%Y%m%d')
    # check if the file exists; skip it if it does not, otherwise add its path to the list
    file_path = f'{path}{datestr1}/SalishSea_1h_{datestr2}_{datestr2}_{filetype}.nc'
    if not os.path.exists(file_path):
        print(f'File {file_path} not found. Check Directory and/or Date Range.')
        continue
    e3t_list.append(file_path)
e3t_list
###Output
_____no_output_____
###Markdown
Create mask
###Code
mask = mung_array(xr.open_dataset('https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSn3DMeshMaskV17-02').isel(time = 0).tmask.values, '3D')
###Output
_____no_output_____
###Markdown
Test process with one file
###Code
data = xr.open_dataset(e3t_list[0])
datetimelist = data.time_counter.values.astype('datetime64[s]').astype(datetime)
datearrays = [np.array(
[d.year, d.month, d.day, d.hour, d.minute,d.second]
).astype('float64') for d in datetimelist]
del(datetimelist)
e3t = data.e3t.values
e3t = mung_array(e3t, '3D')
e3t = e3t*mask
metadata = {
'FillValue' : np.array([0.]),
'Units' : b'?C'
}
###Output
_____no_output_____
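###Markdown
Now that `e3t`, `datearrays` and `metadata` are available, the `write_grid` function defined above could be exercised. The call below is only a hypothetical sketch: the file name, group name, accumulator and compression level are illustrative assumptions, not values taken from the actual forcing workflow.
###Code
# Hypothetical example call of write_grid; it stores one dataset per time step,
# so we pass the munged e3t array (time on the first axis) together with the
# matching date arrays and the metadata dictionary defined above.
write_grid(
    data=e3t,                  # (time, depth, x, y); data[i] is written as one record
    datearrays=datearrays,     # list of [Y, M, D, h, m, s] arrays built above
    metadata=metadata,         # attribute dictionary defined above
    filename='e3t_test.hdf5',  # illustrative output file name
    groupname='e3t',           # illustrative group name under /Results
    accumulator=1,             # starting index for the Time_/e3t_ record numbering (assumed)
    compression_level=4,       # illustrative gzip level
)
###Output
_____no_output_____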
###Markdown
Create a matrix of %depth values for all locations and times by looping through time and space
###Code
total_depth = e3t.sum(1)
e3t_frac_dz = np.empty_like(e3t)
if os.path.isfile('/home/rmueller/data/vvl/test_e3t_frac.nc'):
    test = xr.open_dataset('/home/rmueller/data/vvl/test_e3t_frac.nc')
    print('Loading e3t_frac_dz from file')
    e3t_frac_dz = test.to_array().values[0]  # recover the saved fractions so the next cell can use them
else:
print('Creating matrix of percent total depth for e3t levels (this will take some time)')
for t in range(e3t.shape[0]):
for i in range(e3t.shape[2]):
for j in range(e3t.shape[3]):
for z in range(e3t.shape[1]):
e3t_frac_dz[t,z,i,j] = e3t[t,z,i,j]/total_depth[t,i,j]
print('saving to ', out_e3t_frac)
# convert to xarray for ease of output
xrfrac = xr.DataArray(e3t_frac_dz)
xrfrac.to_netcdf('/home/rmueller/data/vvl/test_e3t_frac.nc')
# Calculate new e3t based on desired depth change
e3t_new = (total_depth[1,256,265] + depth_change) * e3t_frac_dz
e3t_new.shape
###Output
_____no_output_____
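###Markdown
The quadruple loop above can take a long time on a full day of hourly fields. An equivalent, much faster computation can be written with NumPy broadcasting; this is only a sketch and assumes `e3t` and `total_depth = e3t.sum(1)` exactly as defined in the previous cell.
###Code
# Vectorized equivalent of the nested loop: divide each level thickness by the
# water-column total, broadcasting total_depth over the depth axis.
# np.divide with a 'where' mask avoids dividing by zero over land points.
e3t_frac_dz_vec = np.divide(
    e3t,
    total_depth[:, np.newaxis, :, :],
    out=np.zeros_like(e3t),
    where=total_depth[:, np.newaxis, :, :] > 0,
)
###Output
_____no_output_____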
###Markdown
Test process with multiple files
###Code
for file_path in e3t_list:
data = xr.open_dataset(file_path)
datetimelist = data.time_counter.values.astype('datetime64[s]').astype(datetime)
datearrays = [np.array(
[d.year, d.month, d.day, d.hour, d.minute,d.second]
).astype('float64') for d in datetimelist]
del(datetimelist)
e3t = data.e3t.values
e3t = mung_array(e3t, '3D')
e3t = e3t*mask
metadata = {
'FillValue' : np.array([0.]),
'Units' : b'?C'
}
e3t.shape
###Output
_____no_output_____ |
notebooks/1_hsv_values_ds_colors.ipynb | ###Markdown
Get general average HSV values for Daniel Smith cropped images
###Code
import pandas as pd
paths_df = pd.read_csv('/Users/macbook/Box/git_hub/Insight_Project_clean/data/paths_df.csv')
#create the lists to hold the averaged hsv values
h = []
s = []
v = []
import cv2
#uses cv2 to import the cropped images and calculate the mean of the whole image for each channel
for i in range(0,len(paths_df)):
image_path = paths_df.crop_path[i]
image = cv2.imread(image_path)
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)   # convert once and reuse for all three channels
    channel_means = hsv.mean(axis=1).mean(axis=0)  # mean over rows, then over row-means -> (h, s, v)
    h.append(channel_means[0])
    s.append(channel_means[1])
    v.append(channel_means[2])
#append the values to the dataframe
paths_df['h'] = h
paths_df['s'] = s
paths_df['v'] = v
###Output
_____no_output_____
###Markdown
Upload the complete pigment information df to SQL
###Code
from sqlalchemy import create_engine
from sqlalchemy_utils import database_exists, create_database
import psycopg2
import pandas as pd
import sys
sys.path.append('/Users/macbook/Box/git_hub/Insight_Project_clean/scripts/')
#import scripts.sql_con as sql
import sql_con
from sql_con import df_from_query
# create the SQLAlchemy engine for the local 'colors' database (connection string as shown in the output below)
engine = create_engine('postgresql://macbook:DarwinRulez!1@localhost/colors')
paths_df.to_sql('ds_data', engine, if_exists='replace')
sql_query = """SELECT * FROM ds_data
LIMIT 5"""
df = df_from_query(sql_query)
df
###Output
_____no_output_____
###Markdown
Generating the average color data for the clustering.To capture the variation in each swatch. I am taking the average for each row of pixels in the cropped swatch
###Code
ds_swatches = pd.DataFrame()
for i in range(0,len(paths_df)):
image_path = paths_df.crop_path[i]
image = cv2.imread(image_path)
image_mean = cv2.cvtColor(image, cv2.COLOR_BGR2HSV).mean(axis=0)
imported =pd.DataFrame(image_mean, columns=["h","s","v"])
imported["name"] = paths_df.name[i]
imported["label"] = paths_df.label[i]
ds_swatches = pd.concat([ds_swatches,imported], ignore_index=True)
ds_swatches.to_sql('ds_swatches', engine, if_exists='replace')
sql_query2 = """
SELECT *FROM ds_swatches LIMIT 10;
"""
color_data_from_sql = df_from_query(sql_query2)
color_data_from_sql
###Output
postgresql://macbook:DarwinRulez!1@localhost/colors
|
DNA classification using ML-NLP.ipynb | ###Markdown
DNA sequence data with Machine Learning and Natural Language Processing A classification model that can predict a gene's function based on the DNA sequence of the coding region alone.
###Code
#importing the necessary packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#Human gene data
human_data = pd.read_table('C:\\Users\\mesho\\OneDrive\\Desktop\\DNA classification using ML-NLP\\dataset\\human_data.txt')
print(human_data.head())
print(human_data.shape)
print(human_data.columns)
print(human_data.isnull())
###Output
sequence class
0 False False
1 False False
2 False False
3 False False
4 False False
... ... ...
4375 False False
4376 False False
4377 False False
4378 False False
4379 False False
[4380 rows x 2 columns]
###Markdown
We have some data for human DNA coding sequences and a class label. We also have similar data for chimpanzee and dog.
###Code
#Chimpanzee data
chimp_data = pd.read_table('C:\\Users\\mesho\\OneDrive\\Desktop\\DNA classification using ML-NLP\\dataset\\chimp_data.txt')
print(chimp_data.head())
print(chimp_data.shape)
print(chimp_data.columns)
print(chimp_data.isnull())
#Dog_data
dog_data = pd.read_table('C:\\Users\\mesho\\OneDrive\\Desktop\\DNA classification using ML-NLP\\dataset\\dog_data.txt')
print(dog_data.head())
print(dog_data.shape)
print(dog_data.columns)
print(dog_data.isnull())
## Summary Stats for genes
def Summary_stats(data):
stats = data.describe(include = 'all')
    print('The summary statistics of the data:', stats)
Summary_stats(human_data)
Summary_stats(chimp_data)
Summary_stats(dog_data)
sequence_len = []
def sequence_length(dataframe):
    # length (in bases) of each DNA sequence in the dataframe
    length = dataframe.sequence.str.len()
    print(length)
    return length
human_len=sequence_length(human_data)
print(human_len)
###Output
0 207
1 681
2 1686
3 1206
4 1437
...
4375 57
4376 5883
4377 5817
4378 753
4379 459
Name: sequence, Length: 4380, dtype: int64
0 207
1 681
2 1686
3 1206
4 1437
...
4375 57
4376 5883
4377 5817
4378 753
4379 459
Name: sequence, Length: 4380, dtype: int64
###Markdown
The definition of each of the 7 classes and how many instances of each there are in the training data.
###Code
from IPython.display import Image
Image('C:\\Users\\mesho\\OneDrive\\Desktop\\DNA classification using ML-NLP\\img\\Class.PNG')
###Output
_____no_output_____
###Markdown
Treating DNA sequence as a "language", otherwise known as k-mer counting, for applying NLP techniques As we can see, the sequence lengths are not uniform, so we do not get fixed-length vectors, which is a requirement for feeding data to a classification or regression model. So we APPLY THE K-MERS FUNCTIONALITY OF NLP to break the DNA sequences into overlapping words of a uniform length k and use them as vectors. The method used here breaks the string into k-mers of a fixed length (hexamers or octamers). E.g. 'ATGGGCAGCGCCAGCCCCGGCCTGAGCAGCGTGTCCCCCAGCCG' ->> (hexamers) 'ATGGGC', 'AGCGCC', 'AGCCCC' and so on, or (octamers) 'ATGGGCAG', 'CGCCAGCC', 'CCGGCCTG' and so on. These will then be converted into a fixed set of vectors using a Natural Language Processing technique. Function for converting the sequence into uniform-length words (taking hexamers in this case). We are applying k-mers to the complete sequences.
###Code
## Function to create kmers words of length 6 (hexamers)
def build_kmers(sequence, size=6):
return [sequence[x:x+size].upper() for x in range(len(sequence) - size +1)]
###Output
_____no_output_____
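###Markdown
As a quick sanity check, this is what `build_kmers` returns for a short toy sequence (the sequence below is made up purely for illustration):
###Code
# Overlapping hexamers slide one base at a time along the sequence
print(build_kmers('atgggcagcgcc'))
# ['ATGGGC', 'TGGGCA', 'GGGCAG', 'GGCAGC', 'GCAGCG', 'CAGCGC', 'AGCGCC']
###Output
_____no_output_____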
###Markdown
Now we can convert training data sequences into short overlapping k-mers from sequence string
###Code
#Adding a k_mers column to respective dataframes
human_data['k_mers'] = human_data.apply(lambda x: build_kmers(x['sequence']), axis=1)
chimp_data['k_mers'] = chimp_data.apply(lambda x: build_kmers(x['sequence']), axis=1)
dog_data['k_mers'] = dog_data.apply(lambda x: build_kmers(x['sequence']), axis=1)
human_data.head
chimp_data
dog_data
human_data.drop('sequence',inplace=True, axis=1)
print(human_data)
chimp_data.drop('sequence',inplace=True, axis=1)
dog_data.drop('sequence',inplace=True, axis=1)
print(chimp_data,dog_data)
###Output
class k_mers
0 4 [ATGCCC, TGCCCC, GCCCCA, CCCCAA, CCCAAC, CCAAC...
1 4 [ATGAAC, TGAACG, GAACGA, AACGAA, ACGAAA, CGAAA...
2 4 [ATGGCC, TGGCCT, GGCCTC, GCCTCG, CCTCGC, CTCGC...
3 4 [ATGGCC, TGGCCT, GGCCTC, GCCTCG, CCTCGC, CTCGC...
4 6 [ATGGGC, TGGGCA, GGGCAG, GGCAGC, GCAGCG, CAGCG...
... ... ...
1677 5 [ATGCTG, TGCTGA, GCTGAG, CTGAGC, TGAGCG, GAGCG...
1678 5 [ATGCTG, TGCTGA, GCTGAG, CTGAGC, TGAGCG, GAGCG...
1679 6 [ATGAAG, TGAAGC, GAAGCG, AAGCGA, AGCGAC, GCGAC...
1680 3 [ATGACT, TGACTG, GACTGG, ACTGGA, CTGGAA, TGGAA...
1681 3 [ATGTTG, TGTTGC, GTTGCC, TTGCCC, TGCCCA, GCCCA...
[1682 rows x 2 columns] class k_mers
0 4 [ATGCCA, TGCCAC, GCCACA, CCACAG, CACAGC, ACAGC...
1 4 [ATGAAC, TGAACG, GAACGA, AACGAA, ACGAAA, CGAAA...
2 6 [ATGGAA, TGGAAA, GGAAAC, GAAACA, AAACAC, AACAC...
3 6 [ATGTGC, TGTGCA, GTGCAC, TGCACT, GCACTA, CACTA...
4 0 [ATGAGC, TGAGCC, GAGCCG, AGCCGG, GCCGGC, CCGGC...
.. ... ...
815 5 [ATGGTC, TGGTCG, GGTCGG, GTCGGT, TCGGTC, CGGTC...
816 6 [ATGGCG, TGGCGG, GGCGGC, GCGGCG, CGGCGA, GGCGA...
817 6 [ATGAGC, TGAGCT, GAGCTC, AGCTCG, GCTCGG, CTCGG...
818 1 [GCCCCG, CCCCGA, CCCGAG, CCGAGG, CGAGGA, GAGGA...
819 6 [ATGGCC, TGGCCT, GGCCTG, GCCTGG, CCTGGG, CTGGG...
[820 rows x 2 columns]
###Markdown
Now we are converting the strings into list of strings so that we can apply NLP techniques to convert them into verctors.
###Code
human_texts = list(human_data['k_mers'])
for item in range(len(human_texts)):
human_texts[item] = ' '.join(human_texts[item])
y_data = human_data.iloc[:,0].values
print(human_texts[0])
print(len(y_data))
#Similarly applying the same for chimpanzee and dog as well
chimp_texts = list(chimp_data['k_mers'])
for item in range(len(chimp_texts)):
chimp_texts[item] = ' '.join(chimp_texts[item])
y_chimp = chimp_data.iloc[:,0].values
dog_texts = list(dog_data['k_mers'])
for item in range(len(dog_texts)):
dog_texts[item] = ' '.join(dog_texts[item])
y_dog = dog_data.iloc[:,0].values
print(len(y_chimp))
print(len(y_data))
###Output
1682
4380
###Markdown
Applying BoW (Bag of Words) with CountVectorizer, a Natural Language Processing technique
###Code
#Creating bag of words using CountVectorizer
#This is equivalent to k-mer counting
#The n-gram size is 3
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(ngram_range=(3,3))
X = cv.fit_transform(human_texts)
X_chimp = cv.transform(chimp_texts)
X_dog = cv.transform(dog_texts)
print(X.shape)
print(X_chimp.shape)
print(X_dog.shape)
human_data['class'].value_counts().sort_index().plot.bar()
plt.xlabel('Class')
plt.ylabel('Counts (in nos.)')
plt.show()
###Output
_____no_output_____
###Markdown
Using train_test_split on the human dataset to split into testing and training data
###Code
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X, y_data, test_size=0.20, random_state=42)
print(X_train.shape)
print(X_test.shape)
###Output
(3504, 65447)
(876, 65447)
###Markdown
Applying the Multinomial Naive Bayes classifier. Here an n-gram size of 3 is used with a model alpha of 0.2.
###Code
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB(alpha=0.2, class_prior=None, fit_prior=True)
classifier.fit(X_train,y_train)
y_pred = classifier.predict(X_test)
###Output
_____no_output_____
###Markdown
Calculating some model performance metrics like the confusion matrix, accuracy, precision, recall and F1 score.
###Code
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
print("Confusion matrix\n")
print(pd.crosstab(pd.Series(y_test, name='Actual'), pd.Series(y_pred, name='Predicted')))
def get_metrics(y_test, y_predicted):
accuracy = accuracy_score(y_test, y_predicted)
precision = precision_score(y_test, y_predicted, average='weighted')
recall = recall_score(y_test, y_predicted, average='weighted')
f1 = f1_score(y_test, y_predicted, average='weighted')
return accuracy, precision, recall, f1
accuracy, precision, recall, f1 = get_metrics(y_test, y_pred)
print("accuracy = %.3f \nprecision = %.3f \nrecall = %.3f \nf1 = %.3f" % (accuracy, precision, recall, f1))
###Output
Confusion matrix
Predicted 0 1 2 3 4 5 6
Actual
0 97 0 0 0 3 0 2
1 1 93 0 0 2 0 10
2 0 0 77 0 0 0 1
3 0 0 0 121 0 0 4
4 2 0 0 0 142 0 5
5 0 0 0 0 0 48 3
6 0 0 0 2 1 1 261
accuracy = 0.958
precision = 0.960
recall = 0.958
f1 = 0.958
###Markdown
Calculating the same metrics for an n-gram size of 4 and a model alpha of 0.2.
###Code
#Creating bag of words using CountVectorizer
#This is equivalent to k-mer counting
#The n-gram size is 4
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(ngram_range=(4,4))
X = cv.fit_transform(human_texts)
X_chimp = cv.fit_transform(chimp_texts)
X_dog = cv.fit_transform(dog_texts)
print(X.shape)
print(X_chimp.shape)
print(X_dog.shape)
human_data['class'].value_counts().sort_index().plot.bar()
plt.xlabel('Class')
plt.ylabel('Counts (in nos.)')
plt.show()
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X, y_data, test_size=0.20, random_state=42)
print(X_train.shape)
print(X_test.shape)
###Output
(3504, 232414)
(876, 232414)
###Markdown
Applying the Multinomial Naive Bayes classifier. Here an n-gram size of 4 is used with a model alpha of 0.2.
###Code
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB(alpha=0.2, class_prior=None, fit_prior=True)
classifier.fit(X_train,y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
print("Confusion matrix\n")
print(pd.crosstab(pd.Series(y_test, name='Actual'), pd.Series(y_pred, name='Predicted')))
def get_metrics(y_test, y_predicted):
accuracy = accuracy_score(y_test, y_predicted)
precision = precision_score(y_test, y_predicted, average='weighted')
recall = recall_score(y_test, y_predicted, average='weighted')
f1 = f1_score(y_test, y_predicted, average='weighted')
return accuracy, precision, recall, f1
accuracy, precision, recall, f1 = get_metrics(y_test, y_pred)
print("accuracy = %.3f \nprecision = %.3f \nrecall = %.3f \nf1 = %.3f" % (accuracy, precision, recall, f1))
###Output
Confusion matrix
Predicted 0 1 2 3 4 5 6
Actual
0 100 0 0 0 1 0 1
1 0 104 0 0 0 0 2
2 0 0 78 0 0 0 0
3 0 0 0 124 1 0 0
4 1 0 0 0 145 0 3
5 0 0 0 0 0 51 0
6 1 0 0 1 0 0 263
accuracy = 0.987
precision = 0.988
recall = 0.987
f1 = 0.987
###Markdown
We achieve the highest F1 score of 98.7% on the human dataset when we keep the model alpha at 0.2 and the n-gram size equal to 4. Applying the Multinomial Naive Bayes classifier with an n-gram size of 4 and a model alpha of 0.2 to the Chimpanzee dataset.
###Code
chimp_data['class'].value_counts().sort_index().plot.bar()
plt.xlabel('Class')
plt.ylabel('Counts (in nos.)')
plt.show()
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X_chimp, y_chimp, test_size=0.20, random_state=42)
print(X_train.shape)
print(X_test.shape)
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB(alpha=0.2, class_prior=None, fit_prior=True)
classifier.fit(X_train,y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
print("Confusion matrix\n")
print(pd.crosstab(pd.Series(y_test, name='Actual'), pd.Series(y_pred, name='Predicted')))
def get_metrics(y_test, y_predicted):
accuracy = accuracy_score(y_test, y_predicted)
precision = precision_score(y_test, y_predicted, average='weighted')
recall = recall_score(y_test, y_predicted, average='weighted')
f1 = f1_score(y_test, y_predicted, average='weighted')
return accuracy, precision, recall, f1
accuracy, precision, recall, f1 = get_metrics(y_test, y_pred)
print("accuracy = %.3f \nprecision = %.3f \nrecall = %.3f \nf1 = %.3f" % (accuracy, precision, recall, f1))
###Output
Confusion matrix
Predicted 0 1 2 3 4 5 6
Actual
0 27 0 0 1 0 0 0
1 0 38 0 1 0 0 0
2 0 0 26 0 0 0 1
3 0 0 0 41 1 0 1
4 0 1 0 5 42 0 4
5 3 0 0 0 3 19 4
6 0 0 0 2 0 0 117
accuracy = 0.920
precision = 0.925
recall = 0.920
f1 = 0.918
###Markdown
We achieve the highest F1 score of 91.8% on the Chimpanzee dataset when we keep the model alpha at 0.2 and the n-gram size equal to 4. Applying the Multinomial Naive Bayes classifier with an n-gram size of 4 and a model alpha of 0.2 to the Dog dataset.
###Code
dog_data['class'].value_counts().sort_index().plot.bar()
plt.xlabel('Class')
plt.ylabel('Counts (in nos.)')
plt.show()
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X_dog, y_dog, test_size=0.20, random_state=42)
print(X_train.shape)
print(X_test.shape)
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB(alpha=0.2, class_prior=None, fit_prior=True)
classifier.fit(X_train,y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
print("Confusion matrix\n")
print(pd.crosstab(pd.Series(y_test, name='Actual'), pd.Series(y_pred, name='Predicted')))
def get_metrics(y_test, y_predicted):
accuracy = accuracy_score(y_test, y_predicted)
precision = precision_score(y_test, y_predicted, average='weighted')
recall = recall_score(y_test, y_predicted, average='weighted')
f1 = f1_score(y_test, y_predicted, average='weighted')
return accuracy, precision, recall, f1
accuracy, precision, recall, f1 = get_metrics(y_test, y_pred)
print("accuracy = %.3f \nprecision = %.3f \nrecall = %.3f \nf1 = %.3f" % (accuracy, precision, recall, f1))
###Output
Confusion matrix
Predicted 0 1 2 3 4 5 6
Actual
0 19 0 0 0 0 2 6
1 0 15 1 1 0 0 2
2 1 0 10 0 0 0 3
3 2 0 0 10 0 0 4
4 4 0 0 4 8 0 7
5 3 0 0 0 0 7 3
6 1 0 0 3 2 0 46
accuracy = 0.701
precision = 0.731
recall = 0.701
f1 = 0.693
|
notebooks/11_temporal_probability_models/index3.ipynb | ###Markdown
Robot LocalizationIn robot localization, we know the map, but not the robot's position. An example of an observation would be a vector of range-finder readings: our agent has several sensors, each reporting the distance to the nearest obstacle in a specific direction. The state space and the readings are typically continuous (it works basically like a very fine grid), so we cannot store $B(X)$. Because of this, particle filtering is a main technique.So, we spread many particles uniformly over the map. Then, after each iteration, we down-weight those particles whose predicted readings are improbable. As a result, because the map would look different from the viewpoint of a mis-placed particle, we end up with the particles concentrated around the real position.The depiction below shows this well. The red dots represent particles. Notice how the algorithm can't decide between two positions until the robot enters a room.What algorithm do you think would be better to drive the agent with, so that we can find and benefit from asymmetries in the map? (Think about random walks)  We can even go a step further and forget about the map. This problem is called **Simultaneous Localization And Mapping**, or **SLAM** for short. In this version of the problem we know neither where the agent is nor what the map is; we have to find both.To solve this problem, we extend our states to also cover the map. For example, we can represent the map with a matrix of 1s and 0s where every element is 1 if the corresponding region of the map is blocked.To solve this problem we use Kalman filtering and particle methods.Notice how the robot starts with complete certainty about its position; as time goes on, it realizes that positions slightly away from its current estimate would also explain the readings, and this leads to uncertainty even about the position. When the agent completes a full cycle, it understands that it should be back at the same position, so its certainty about its position rises once again. Dynamic Bayes NetDynamic Bayesian Networks (**DBN**) extend standard Bayesian networks with the concept of time. This allows us to model time series or sequences. In fact they can model complex multivariate time series, which means we can model the relationships between multiple time series in the same model, as well as different regimes of behavior, since time series often behave differently in different contexts. DBN Particle FiltersA particle is a complete sample for a time step. This is similar to regular particle filtering, where we have to use the sampling methods introduced earlier in the course instead of an explicit distribution.Below are the steps we have to follow (a minimal code sketch of these four steps is given right after this list):* Initialize: generate prior samples for the $t=1$ Bayes net, e.g. particle $G_1^a = (3,3), G_1^b = (5,3)$ for the above image.* Elapse time: sample a successor for each particle, e.g. successor $G_2^a = (2,3), G_2^b = (6,3)$.* Observe: weight each entire sample by the likelihood of the evidence conditioned on the sample, i.e. likelihood $p(E_1^a |G_1^a) \times p(E_1^b |G_1^b)$.* Resample: select samples (tuples of values) in proportion to their likelihood.
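A minimal sketch of these four steps on a toy 1-D grid world follows; the transition and sensor models below are made-up illustrations, not the environment used in the course.
###Code
import numpy as np

def particle_filter_step(particles, evidence, transition_sample, observation_likelihood, rng):
    # Elapse time: sample a successor for each particle
    particles = np.array([transition_sample(p, rng) for p in particles])
    # Observe: weight each particle by the likelihood of the evidence given that particle
    weights = np.array([observation_likelihood(evidence, p) for p in particles])
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

rng = np.random.default_rng(0)
# Initialize: prior samples for t=1, uniform over 10 grid cells
particles = rng.integers(0, 10, size=200)
# Toy models (assumptions): the agent drifts to the right, the sensor reads the position with noise
transition_sample = lambda p, rng: min(p + rng.integers(0, 2), 9)
observation_likelihood = lambda e, p: np.exp(-0.5 * (e - p) ** 2)
particles = particle_filter_step(particles, evidence=4,
                                 transition_sample=transition_sample,
                                 observation_likelihood=observation_likelihood, rng=rng)
print(np.bincount(particles, minlength=10))  # approximate belief over the 10 positions
###Output
_____no_output_____
###Markdown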
Most Likely ExplanationWe are introducing a new query that we can ask our temporal model. The query statement is as follows: what is the most likely path of states that would have produced the current observations? Or, more formally, if our states are $X_i$ and our observations are $E_i$, we want to find$$\operatorname{argmax}_{x_{1:t}} P(x_{1:t}|e_{1:t})$$But how can we answer this query?First, let's define the **state trellis**.The state trellis is a directed weighted graph $G$ whose nodes are the states, and an arc between two states $u$ and $v$ represents a transition between them. The weight of an arc is the probability of that transition happening. More formally, assume we have a transition between $x_{t-1}$ and $x_t$. Then the weight of the arc between these two will be $P(x_{t}|x_{t-1}) \times P(e_t|x_t)$.Note that with this definition, each path is a sequence of states, and the product of the weights along a path is the probability of that path, given the evidence. Viterbi's AlgorithmViterbi's algorithm uses dynamic programming to find the best path through the states. It first finds how probable each state at time $t-1$ is, and then uses the fact that the state at time $t$ depends only on the previous state, so having those probabilities is enough to compute the probabilities of the new step. Finally, the predecessor that yields the most likely current state is stored as its parent.\begin{align*}m_t[x_t] &= \max_{x_{1:t-1}} P(x_{1:t-1}, x_t, e_{1:t}) \\&= P(e_t|x_t)\max_{x_{t-1}} P(x_t|x_{t-1})m_{t-1}[x_{t-1}]\end{align*}$$p_t[x_t] = \operatorname{argmax}_{x_{t-1}} P(x_t|x_{t-1})m_{t-1}[x_{t-1}]$$ ExampleConsider a village where all villagers are either healthy or have a fever, and only the village doctor can determine whether each has a fever. The doctor diagnoses fever by asking patients how they feel. The villagers may only answer that they feel normal, dizzy, or cold.The doctor believes that the health condition of his patients operates as a discrete Markov chain. There are two states, "Healthy" and "Fever", but the doctor cannot observe them directly; they are hidden from him. On each day, there is a certain chance that the patient will tell the doctor he is "normal", "cold", or "dizzy", depending on his health condition.The observations (normal, cold, dizzy) along with the hidden state (healthy, fever) form a hidden Markov model (HMM).In this piece of code, start_p represents the doctor's belief about which state the HMM is in when the patient first visits (all he knows is that the patient tends to be healthy). The particular probability distribution used here is not the equilibrium one, which is (given the transition probabilities) approximately `{'Healthy': 0.57, 'Fever': 0.43}`. The transition_p represents the change of the health condition in the underlying Markov chain. In this example, there is only a 30% chance that tomorrow the patient will have a fever if he is healthy today. The emit_p represents how likely each possible observation (normal, cold, or dizzy) is given the underlying condition (healthy or fever). If the patient is healthy, there is a 50% chance that he feels normal; if he has a fever, there is a 60% chance that he feels dizzy. The patient visits three days in a row and the doctor discovers that on the first day he feels normal, on the second day he feels cold, and on the third day he feels dizzy. The doctor has a question: what is the most likely sequence of health conditions of the patient that would explain these observations?
###Code
obs = ("normal", "cold", "dizzy")
states = ("Healthy", "Fever")
start_p = {"Healthy": 0.6, "Fever": 0.4}
trans_p = {
"Healthy": {"Healthy": 0.7, "Fever": 0.3},
"Fever": {"Healthy": 0.4, "Fever": 0.6},
}
emit_p = {
"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
"Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6},
}
def viterbi(obs, states, start_p, trans_p, emit_p):
V = [{}]
for st in states:
V[0][st] = {"prob": start_p[st] * emit_p[st][obs[0]], "prev": None}
# Run Viterbi when t > 0
for t in range(1, len(obs)):
V.append({})
for st in states:
max_tr_prob = V[t - 1][states[0]]["prob"] * trans_p[states[0]][st]
prev_st_selected = states[0]
for prev_st in states[1:]:
tr_prob = V[t - 1][prev_st]["prob"] * trans_p[prev_st][st]
if tr_prob > max_tr_prob:
max_tr_prob = tr_prob
prev_st_selected = prev_st
max_prob = max_tr_prob * emit_p[st][obs[t]]
V[t][st] = {"prob": max_prob, "prev": prev_st_selected}
for line in dptable(V):
print(line)
opt = []
max_prob = 0.0
best_st = None
# Get most probable state and its backtrack
for st, data in V[-1].items():
if data["prob"] > max_prob:
max_prob = data["prob"]
best_st = st
opt.append(best_st)
previous = best_st
# Follow the backtrack till the first observation
for t in range(len(V) - 2, -1, -1):
opt.insert(0, V[t + 1][previous]["prev"])
previous = V[t + 1][previous]["prev"]
print ("The steps of states are " + " ".join(opt) + " with highest probability of %s" % max_prob)
def dptable(V):
# Print a table of steps from dictionary
yield " " * 5 + " ".join(("%3d" % i) for i in range(len(V)))
for state in V[0]:
yield "%.7s: " % state + " ".join("%.7s" % ("%lf" % v[state]["prob"]) for v in V)
viterbi(obs, states, start_p, trans_p, emit_p)
###Output
0 1 2
Healthy: 0.30000 0.08400 0.00588
Fever: 0.04000 0.02700 0.01512
The steps of states are Healthy Healthy Fever with highest probability of 0.01512
|
data-science/metrics/MetricsNN_MAE.ipynb | ###Markdown
###Code
# Imports
import pandas as pd
import numpy as np
from sklearn.metrics import mean_absolute_error
# Load the raw data
w1_results_df = pd.read_csv('https://raw.githubusercontent.com/JimKing100/NFL-Live/master/data-science/data/rnn-combined/predictions-week1.csv')
#### The week 1 predictions
# week1-cur = 2018 total points
# week1-pred = predicted points for the season
# week1-act = actual points for the season
# weekn-cur = week (n-1) actual points
# weekn-pred = predicted points for the rest of the season (n-17)
# weekn-act = actual points for the rest of the season (n-17)
w1_results_df.head()
# Calculate the MAE for predicted points vs. actual points
# Calculate the MAE for current points using the average of previous weeks
column_names = ['week', 'nn', 'average']
metric_df = pd.DataFrame(columns = column_names)
for i in range(1, 18):
filename = 'https://raw.githubusercontent.com/JimKing100/NFL-Live/master/data-science/data/rnn-combined/predictions-week' + str(i) + '.csv'
# Column names
week_cur = 'week' + str(i) + '-cur'
week_pred = 'week' + str(i) + '-pred'
week_act = 'week' + str(i) + '-act'
# Weekly predictions
results_df = pd.read_csv(filename)
# Create the current points list using 2018 points in week 1 and average points going forward
if i == 1:
week_current = results_df['week1-cur'].tolist()
else:
# for each player (element) calculate the average points (element/(i-1)) and multiply by remaining games (17-(i-1))
# the 17th week is 0 and represents the bye week (17 weeks and 16 games)
week_list = results_df[week_cur].tolist()
week_current = [((element / (i -1)) * (17 - (i -1))) for element in week_list]
    # Create the prediction and actual lists
week_pred = results_df[week_pred].tolist()
week_act = results_df[week_act].tolist()
# Calculate the MAE for predicted vs. actual
week_pa_mae = mean_absolute_error(week_act, week_pred)
print('MAE predicted vs actual week {0:2d} {1:3.2f}'.format(i, week_pa_mae))
# Calculate the MAE for current vs. actual
week_ca_mae = mean_absolute_error(week_act, week_current)
print('MAE current vs actual week {0:2d} {1:3.2f}'.format(i, week_ca_mae), '\n')
metric_df = metric_df.append({'week': i, 'nn': week_pa_mae, 'average': week_ca_mae}, ignore_index=True)
file_name = '/content/nn_metrics.csv'
metric_df.to_csv(file_name, index=False)
###Output
MAE predicted vs actual week 1 28.69
MAE current vs actual week 1 36.57
MAE predicted vs actual week 2 27.83
MAE current vs actual week 2 51.02
MAE predicted vs actual week 3 24.86
MAE current vs actual week 3 38.27
MAE predicted vs actual week 4 22.99
MAE current vs actual week 4 33.01
MAE predicted vs actual week 5 21.49
MAE current vs actual week 5 29.42
MAE predicted vs actual week 6 20.25
MAE current vs actual week 6 27.21
MAE predicted vs actual week 7 19.39
MAE current vs actual week 7 24.31
MAE predicted vs actual week 8 18.00
MAE current vs actual week 8 21.69
MAE predicted vs actual week 9 16.28
MAE current vs actual week 9 19.55
MAE predicted vs actual week 10 15.04
MAE current vs actual week 10 17.19
MAE predicted vs actual week 11 13.30
MAE current vs actual week 11 14.80
MAE predicted vs actual week 12 11.76
MAE current vs actual week 12 12.71
MAE predicted vs actual week 13 10.44
MAE current vs actual week 13 10.93
MAE predicted vs actual week 14 8.57
MAE current vs actual week 14 8.92
MAE predicted vs actual week 15 6.75
MAE current vs actual week 15 7.11
MAE predicted vs actual week 16 5.06
MAE current vs actual week 16 5.35
MAE predicted vs actual week 17 2.94
MAE current vs actual week 17 3.21
|
dd_1/Part 4/Section 02 - Classes/11 - Read-Only and Computed Properties.ipynb | ###Markdown
Read-Only and Computed Properties Although write-only properties are not that common, read-only properties (i.e. that define a getter but not a setter) are quite common for a number of things. Of course, we can create read-only properties, but since nothing is private, at best we are "suggesting" to the users of our class they should treat the property as read-only. There's always a way to hack around that of course.But still, it's good to be able to at least explicitly indicate to a user that a property is meant to be read-only. The use case I'm going to focus on in this video, is one of computed properties. Those are properties that may not actually have a backing variable, but are instead calculated on the fly. Consider this simple example of a `Circle` class where we can read/write the radius of the circle, but want a computed property for the area. We don't need to store the area value, we can alway calculate it given the current radius value.
###Code
from math import pi
class Circle:
def __init__(self, radius):
self.radius = radius
@property
def area(self):
print('calculating area...')
return pi * (self.radius ** 2)
c = Circle(1)
c.area
###Output
calculating area...
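###Markdown
Since we only defined a getter, `area` is a read-only property: trying to assign to it raises an `AttributeError`.
###Code
try:
    c.area = 100
except AttributeError as ex:
    print(ex)
###Output
_____no_output_____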
###Markdown
We could certainly just use a class method `area()`, but the area is more a property of the circle, so it makes more sense to just retrieve it as a property, without the extra `()` to make the call. The advantage of how we did this is that should the radius of the circle ever change, the area property will immediately reflect that.
###Code
c.radius = 2
c.area
###Output
calculating area...
###Markdown
On the other hand, it's also a weakness - every time we need the area of the circle, it gets recalculated, even if the radius has not changed!
###Code
c.area
c.area
###Output
calculating area...
calculating area...
###Markdown
So now we can use properties to fix this problem without breaking our interface!We are going to cache the area value, and only re-calculate it if the radius has changed.In order for us to know if the radius has changed, we are going to make the radius into a property as well, and the setter will keep track of whether the radius is set, in which case it will invalidate the cached area value.
###Code
class Circle:
def __init__(self, radius):
self.radius = radius
self._area = None
@property
def radius(self):
return self._radius
@radius.setter
def radius(self, value):
# if radius value is set we invalidate our cached _area value
# we could make this more intelligent and see if the radius has actually changed
# but keeping it simple
self._area = None
# we could even add validation here, like value has to be numeric, non-negative, etc
self._radius = value
@property
def area(self):
if self._area is None:
# value not cached - calculate it
print('Calculating area...')
self._area = pi * (self.radius ** 2)
return self._area
c = Circle(1)
c.area
c.area
c.radius = 2
c.area
c.area
###Output
_____no_output_____
###Markdown
There are a lot of other uses for computed properties.Some properties may even do a lot of work, like retrieving data from a database, making a call to some external API, and so on. Example Let's write a class that takes a URL, downloads the web page for that URL and provides us some metrics on that URL - like how long it took to download, the size (in bytes) of the page. Although I am going to use the `urllib` module for this, I strongly recommend you use the `requests` 3rd party library instead: http://docs.python-requests.org
###Code
import urllib
from time import perf_counter
class WebPage:
def __init__(self, url):
self.url = url
self._page = None
self._load_time_secs = None
self._page_size = None
@property
def url(self):
return self._url
@url.setter
def url(self, value):
self._url = value
self._page = None
# we'll lazy load the page - i.e. we wait until some property is requested
@property
def page(self):
if self._page is None:
self.download_page()
return self._page
@property
def page_size(self):
if self._page is None:
# need to first download the page
self.download_page()
return self._page_size
@property
def time_elapsed(self):
if self._page is None:
self.download_page()
return self._load_time_secs
def download_page(self):
self._page_size = None
self._load_time_secs = None
start_time = perf_counter()
with urllib.request.urlopen(self.url) as f:
self._page = f.read()
end_time = perf_counter()
self._page_size = len(self._page)
self._load_time_secs = end_time - start_time
urls = [
'https://www.google.com',
'https://www.python.org',
'https://www.yahoo.com'
]
for url in urls:
page = WebPage(url)
print(f'{url} \tsize={format(page.page_size, "_")} \telapsed={page.time_elapsed:.2f} secs')
###Output
https://www.google.com size=11_489 elapsed=0.20 secs
https://www.python.org size=49_132 elapsed=0.18 secs
https://www.yahoo.com size=524_548 elapsed=0.77 secs
|
Notebooks/NewExperiments/NExperiment_6_NNOOA.ipynb | ###Markdown
Tutorial with 1d advection equationCode pipeline from the PNAS 2020 paper by Jiawei Zhuang et al.
###Code
# %%capture
# !pip install -U numpy==1.18.5
# !pip install h5py==2.10.0
'Comment above cell and restart and run all before'
'Check numpys version BEFORE and AFTER runtime restart'
import numpy as np
import matplotlib.pyplot as plt
print(np.__version__)
###Output
1.18.5
###Markdown
Setup
###Code
%%capture
!git clone https://github.com/aditya5252/Multiprocessor_Advection_.git
!pip install git+https://github.com/JiaweiZhuang/data-driven-pdes@fix-beam
%tensorflow_version 1.x
import os
import matplotlib.pyplot as plt
import numpy as np
from numpy.random import choice
import pandas as pd
import tensorflow as tf
tf.enable_eager_execution()
%matplotlib inline
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['font.size'] = 14
from google.colab import files # colab-specific utilities; comment out when running locally
tf.enable_eager_execution()
tf.__version__, tf.keras.__version__
import xarray
from datadrivenpdes.core import grids
from datadrivenpdes.core import integrate
from datadrivenpdes.core import models
from datadrivenpdes.core import tensor_ops
from datadrivenpdes.advection import equations as advection_equations
from datadrivenpdes.pipelines import model_utils
# tf.keras.backend.set_floatx('float32')
'Find dt for Advection-1d equation'
def _dx_dt(data,adv_coff):
dx=2*np.pi/(data.shape[1])
return dx,dx*0.08/adv_coff
'Plot time propagation of dataset'
def plot_time_prop(data,t0,t1,t2):
plt.plot(data[t0],label=f'Max_{t0}={data[t0].max()}')
plt.plot(data[t1],label=f'Max_{t1}={data[t1].max()}')
plt.plot(data[t2],label=f'Max_{t2}={data[t2].max()}')
plt.legend()
'Create initial_state dictionary from dataset'
def create_init_state_from_2d_data(data,adv_coff):
c_init=data[0][np.newaxis,:,np.newaxis]
initial_state_obj = {
'concentration': c_init.astype(np.float32), # tensorflow code expects float32
'x_velocity': adv_coff*np.ones(c_init.shape, np.float32) * 1.0,
'y_velocity': np.zeros(c_init.shape, np.float32)
}
for k, v in initial_state_obj.items():
print(k, v.shape) # (sample, x, y)
return initial_state_obj
'Create xarray DatArray from integrated dictionary'
def wrap_as_xarray(integrated):
dr = xarray.DataArray(
integrated['concentration'].numpy().squeeze(-1),
dims = ('time', 'sample', 'x'),
coords = {'time': time_steps, 'x': x_coarse.squeeze()}
)
return dr
def delay_(max_delay,prob_dist):
allowed_delays=np.arange(0.,max_delay)
delay_chosen=choice(allowed_delays,p=prob_dist)
return delay_chosen
def modify_data(sub_data,DAsync=None):
one_arr=np.ones_like(sub_data)
boundary_arr=np.zeros_like(sub_data)
boundary_arr[:,0]=1.
boundary_arr[:,-1]=1.
if (DAsync==0):
delay_arr=np.zeros_like(sub_data)
elif (DAsync==1):
delay_arr=np.zeros_like(sub_data)
for i in range(delay_arr.shape[0]):
delay_arr[i,0]=delay_(nlevels,prob_set)
delay_arr[i,-1]=delay_(nlevels,prob_set)
del_arr = delay_arr + boundary_arr + one_arr
sub_data_modified=np.multiply(del_arr,sub_data)
return sub_data_modified
# This data-generation code is a bit involved, mostly because we use multi-step loss function.
# To produce large training data in parallel, refer to the create_training_data.py script in source code.
def reference_solution(initial_state_fine, fine_grid, coarse_grid,
coarse_time_steps=256):
'What does this function do'
'Runs high-accuracy model at high-resolution'
'smaller dx, => More Nx => More Nt'
'Subsample with subsampling_factor=Resamplingfactor '
'High accuracy data achieved on a coarse grid'
'So essentially obtain coarse-grained, HIGH-ACCURACY, GROUND TRUTH data'
'Return dict of items'
'For my simple use-case , Resamplingfactor = 1 '
'Hence, given sync_data dataset(128 x 32)'
'sync_data dataset itself is taken as the ground truth'
'Hence we do not need this function to obtain Ground truth data '
# use high-order traditional scheme as reference model
equation = advection_equations.VanLeerAdvection(cfl_safety_factor=0.08)
key_defs = equation.key_definitions
# reference model runs at high resolution
model = models.FiniteDifferenceModel(equation, fine_grid)
# need 8x more time steps for 8x higher resolution to satisfy CFL
coarse_ratio = fine_grid.size_x // coarse_grid.size_x
steps = np.arange(0, coarse_time_steps*coarse_ratio+1, coarse_ratio)
# solve advection at high resolution
integrated_fine = integrate.integrate_steps(model, initial_state_fine, steps)
# regrid to coarse resolution
integrated_coarse = tensor_ops.regrid(
integrated_fine, key_defs, fine_grid, coarse_grid)
return integrated_coarse
def ground_dict_from_data(data):
conc_ground=tf.convert_to_tensor(data[:,np.newaxis,:,np.newaxis], dtype=tf.float32, dtype_hint=None, name=None)
ground_soln_dict = {
'concentration': conc_ground, # tensorflow code expects float32
'x_velocity': tf.ones_like(conc_ground, dtype=None, name=None) * 1.0,
'y_velocity': tf.zeros_like(conc_ground, dtype=None, name=None)
}
for k, v in ground_soln_dict.items():
print(k, v.shape) # (sample, x, y)
return ground_soln_dict
def make_train_data(integrated_coarse, coarse_time_steps=256, example_time_steps=4):
# we need to re-format data so that single-step input maps to multi-step output
# remove the last several time steps, as training input
train_input = {k: v[:-example_time_steps] for k, v in integrated_coarse.items()}
# merge time and sample dimension as required by model
n_time, n_sample, n_x, n_y = train_input['concentration'].shape
for k in train_input:
train_input[k] = tf.reshape(train_input[k], [n_sample * n_time, n_x, n_y])
print('\n train_input shape:')
for k, v in train_input.items():
print(k, v.shape) # (merged_sample, x, y)
# pick the shifted time series, as training output
output_list = []
for shift in range(1, example_time_steps+1):
# output time series, starting from each single time step
output_slice = integrated_coarse['concentration'][shift:coarse_time_steps - example_time_steps + shift + 1]
# merge time and sample dimension as required by training
n_time, n_sample, n_x, n_y = output_slice.shape
output_slice = tf.reshape(output_slice, [n_sample * n_time, n_x, n_y])
output_list.append(output_slice)
train_output = tf.stack(output_list, axis=1) # concat along shift_time dimension, after sample dimension
print('\n train_output shape:', train_output.shape) # (merged_sample, shift_time, x, y)
# sanity check on shapes
assert train_output.shape[0] == train_input['concentration'].shape[0] # merged_sample
assert train_output.shape[2] == train_input['concentration'].shape[1] # x
assert train_output.shape[3] == train_input['concentration'].shape[2] # y
assert train_output.shape[1] == example_time_steps
return train_input, train_output
###Output
_____no_output_____
###Markdown
Define Grids & Get Data from Analytical Solution
###Code
err_ls=[]
# we mostly run simulation on coarse grid
# the fine grid is only for obtaining training data and generate the reference "truth"
for ord in range(4,8):
res=2**ord
numPE=1
grid_length = 2*np.pi
fine_grid_resolution = res
# 1d domain, so only 1 point along y dimension
fine_grid = grids.Grid(
size_x=fine_grid_resolution, size_y=1,
step=grid_length/fine_grid_resolution
)
x_fine, _ = fine_grid.get_mesh()
print(x_fine.shape)
#Analytical data sampled at N_t time-steps (N_t is set below from the CFL time step)
init_values=np.sin(x_fine)
CFL=0.08
u0=1.
dx=grid_length/len(x_fine)
dt=dx*CFL/u0
tend=10.
N_t=int(tend//dt)
data_ls=[np.sin(x_fine-u0*dt*n) for n in range(N_t)]
data_ana=np.stack(data_ls)
'Create initial state from data'
data_ana=np.squeeze(data_ana)
initial_state=create_init_state_from_2d_data(data_ana,u0)
model_nn = models.PseudoLinearModel(
advection_equations.FiniteDifferenceAdvection(0.08),
fine_grid,
num_time_steps=4, # multi-step loss function
stencil_size=3, kernel_size=(3, 1), num_layers=4, filters=32,
constrained_accuracy_order=1,
learned_keys = {'concentration_x', 'concentration_y'}, # finite volume view, use edge concentration
activation='relu',)
print(advection_equations.FiniteDifferenceAdvection(0.08).get_time_step(fine_grid,u0) == dt)
tf.random.set_random_seed(14)
time_steps=np.arange(N_t)
%time integrated_untrained = integrate.integrate_steps(model_nn, initial_state, time_steps)
plot_time_prop(integrated_untrained['concentration'].numpy().squeeze(),0,N_t//2,N_t-1)
plt.title('Untrained Model Predictions')
plt.show()
ground_soln_dict=ground_dict_from_data(data_ana)
train_input, train_output = make_train_data(ground_soln_dict,data_ana.shape[0]-1, 4)
%%time
# same as training standard Keras model
model_nn.compile(
optimizer='adam', loss='mae'
)
# tf.random.set_random_seed(42)
# np.random.seed(42)
history = model_nn.fit(
train_input, train_output, epochs=20, batch_size=64,
verbose=0, shuffle=True
)
df_history = pd.DataFrame(history.history)
df_history.plot(marker='.')
plt.show()
time_steps=np.arange(N_t)
%time integrated_trained = integrate.integrate_steps(model_nn, initial_state, time_steps)
plot_time_prop(integrated_trained['concentration'].numpy().squeeze(),0,N_t//2,N_t-1)
plt.title('Trained Model Predictions')
plt.show()
erAr=integrated_trained['concentration'].numpy().squeeze()[N_t-1]-data_ana[N_t-1]
err_=np.mean(np.abs(erAr))
err_ls.append(err_)
###Output
(16, 1)
concentration (1, 16, 1)
x_velocity (1, 16, 1)
y_velocity (1, 16, 1)
True
CPU times: user 4.75 s, sys: 16.2 ms, total: 4.77 s
Wall time: 7.21 s
###Markdown
Calculate O.O.A
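The observed order of accuracy can also be estimated numerically as the magnitude of the slope of log(error) versus log(resolution). A minimal sketch, assuming `err_ls` holds the final-time errors collected in the loop above:
###Code
# Fit a straight line to log(error) vs log(resolution); the magnitude of the
# slope is the empirical order of accuracy of the trained model.
resolutions = np.array([2**i for i in range(4, 8)])
slope, intercept = np.polyfit(np.log(resolutions), np.log(np.array(err_ls)), 1)
print(f'Observed order of accuracy: {-slope:.2f}')
###Output
_____no_output_____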
###Code
type(err_ls)
err_ls=np.array(err_ls)
ls=np.array([2**i for i in range(4,8)])
print(ls)
print(np.log(ls))
plt.plot(np.log(ls),-1*np.log(ls),'r')
plt.plot(np.log(ls),np.log(err_ls),'b')
plt.plot(np.log(ls),-2*np.log(ls),'r')
plt.plot(np.log(ls),np.log(err_ls),'b')
plt.plot(np.log(ls),-3*np.log(ls)+6,'r')
plt.plot(np.log(ls),np.log(err_ls),'b')
###Output
_____no_output_____ |
note/tutorial/quickstart.ipynb | ###Markdown
Quickstart Preliminaries Imports
###Code
import mercs
import numpy as np
from mercs.tests import load_iris, default_dataset
from mercs.core import Mercs
import pandas as pd
###Output
_____no_output_____
###Markdown
FitHere is a small MERCS test drive covering what I suppose you'll need. First, let us generate a basic dataset. Some utility functions are integrated in MERCS, so that goes like this
###Code
train, test = default_dataset(n_features=3)
df = pd.DataFrame(train)
df.head()
df.describe()
###Output
_____no_output_____
###Markdown
Now let's train a MERCS model. To know what options you have, come talk to me or dig in the code. For induction, `nb_targets` and `nb_iterations` matter most. The number of targets speaks for itself; the number of iterations manages the number of trees _for each target_. With `n_jobs` you can do multi-core learning (with joblib, really basic, but works fine on a single machine), which makes stuff faster. `fraction_missing` sets the fraction of attributes that is missing for a tree. However, this parameter only has an effect if you use the `random` selection algorithm. The alternative is the `base` algorithm, which selects targets and uses all the rest as input.
###Code
clf = Mercs(
max_depth=4,
selection_algorithm="random",
fraction_missing=0.6,
nb_targets=2,
nb_iterations=2,
n_jobs=1,
verbose=1,
inference_algorithm="own",
max_steps=8,
prediction_algorithm="it",
)
###Output
_____no_output_____
###Markdown
You have to specify the nominal attributes yourself. This determines whether a regressor or a classifier is learned for that target. MERCS takes care of grouping targets such that no mixed sets are created.
###Code
nominal_ids = {train.shape[1]-1}
nominal_ids
clf.fit(train, nominal_attributes=nominal_ids)
###Output
_____no_output_____
###Markdown
So, now we have learned trees with two targets, but only a single target was nominal. If MERCS worked well, it should have learned single-target classifiers (for attribute 4) and multi-target regressors for all other target sets.
###Code
for idx, m in enumerate(clf.m_list):
msg = """
Model with index: {}
{}
""".format(idx, m.model)
print(msg)
###Output
_____no_output_____
###Markdown
So, that looks good already. Let's examine up close.
###Code
clf.m_codes
###Output
_____no_output_____
###Markdown
That's the matrix that summarizes everything. This can be dense to parse, and there are alternatives to gain insights, for instance:
###Code
for m_idx, m in enumerate(clf.m_list):
msg = """
Tree with id: {}
has source attributes: {}
has target attributes: {},
and predicts {} attributes
""".format(m_idx, m.desc_ids, m.targ_ids, m.out_kind)
print(msg)
###Output
_____no_output_____
###Markdown
And that concludes my quick tour of how to fit with MERCS. PredictionFirst, we generate a query.
###Code
m = clf.m_list[0]
m
m.out_kind
clf.m_fimps
# Single target
q_code=np.zeros(clf.m_codes[0].shape[0], dtype=int)
q_code[-1:] = 1
print("Query code is: {}".format(q_code))
y_pred = clf.predict(test, q_code=q_code)
y_pred[:10]
clf.show_q_diagram()
# Multi-target
q_code=np.zeros(clf.m_codes[0].shape[0], dtype=int)
q_code[-2:] = 1
print("Query code is: {}".format(q_code))
y_pred = clf.predict(test, q_code=q_code)
y_pred[:10]
%debug
%debug
clf.show_q_diagram()
# Missing attributes
q_code=np.zeros(clf.m_codes[0].shape[0], dtype=int)
q_code[-1:] = 1
q_code[:2] = -1
print("Query code is: {}".format(q_code))
y_pred = clf.predict(test, q_code=q_code)
y_pred[:10]
clf.show_q_diagram()
###Output
_____no_output_____
###Markdown
Quickstartmissmercs quickstart guide. Preliminaries Imports
###Code
import missmercs
import numpy as np
import pandas as pd
import sklearn
from sklearn.datasets import load_iris
###Output
_____no_output_____
###Markdown
Setup
###Code
iris = load_iris()
X = iris.get('data')
y = iris.get('target')
matrix = np.c_[X, y]
###Output
_____no_output_____
###Markdown
Quickstartsandboxes quickstart guide. Preliminaries Imports
###Code
import sandboxes
import numpy as np
import pandas as pd
import sklearn
from sklearn.datasets import load_iris
###Output
_____no_output_____
###Markdown
Setup
###Code
iris = load_iris()
X = iris.get('data')
y = iris.get('target')
matrix = np.c_[X, y]
###Output
_____no_output_____
###Markdown
QuickstartDummynator quickstart Preliminaries Imports
###Code
import dummynator
import numpy as np
import pandas as pd
import sklearn
from sklearn.datasets import load_iris
###Output
_____no_output_____
###Markdown
Setup
###Code
iris = load_iris()
X = iris.get('data')
y = iris.get('target')
matrix = np.c_[X, y]
matrix.shape
###Output
_____no_output_____
###Markdown
Fit
###Code
from dummynator import Dummynator
clf = Dummynator()
clf.fit(matrix, strategy='prior')
###Output
_____no_output_____
###Markdown
Predict
###Code
clf.predict(X, 4)
###Output
_____no_output_____
###Markdown
Quickstartalso_anomaly_detector quickstart guide. Preliminaries Imports
###Code
import also_anomaly_detector
import numpy as np
import pandas as pd
import sklearn
from sklearn.datasets import load_iris
###Output
_____no_output_____
###Markdown
Setup
###Code
iris = load_iris()
X = iris.get('data')
y = iris.get('target')
matrix = np.c_[X, y]
###Output
_____no_output_____
###Markdown
Quickstartnba-anomaly-generator quickstart guide. Preliminaries Imports
###Code
import nba_anomaly_generator  # assumed module name: the distribution is 'nba-anomaly-generator', but a valid import needs underscores
import numpy as np
import pandas as pd
import sklearn
from sklearn.datasets import load_iris
###Output
_____no_output_____
###Markdown
Setup
###Code
iris = load_iris()
X = iris.get('data')
y = iris.get('target')
matrix = np.c_[X, y]
###Output
_____no_output_____
###Markdown
Quickstartaffe quickstart guide. Preliminaries
###Code
# This is a code-formatter, you can comment it out without losing functionality
%load_ext lab_black
###Output
_____no_output_____
###Markdown
Imports
###Code
import affe
import numpy as np
import pandas as pd
from affe.execs import (
CompositeExecutor,
NativeExecutor,
JoblibExecutor,
GNUParallelExecutor,
)
from affe import Flow
from affe.tests import get_dummy_flow
###Output
_____no_output_____
###Markdown
Basic Illustration: Flows saying _"hi"_To illustrate, let us create a few different workflows. Each of them says "hi" in its own signature way.
###Code
# Making a flow is very easy.
flows = [
get_dummy_flow(message="hi" * (i + 1), content=dict(i=i * 10)) for i in range(3)
]
flow = flows[0]
flow.config
###Output
_____no_output_____
###Markdown
Flow ExecutionNow you can print some hello worlds, embedded in a Flow object.
###Code
flow.run()
flow.run_with_log()
flows[1].run_with_log()
###Output
Hello world
2 secs passed
hi
###Markdown
Flow Scheduling= Execution of multiple flows, for instance via a tool like `joblib`
###Code
e = NativeExecutor
c_jl = JoblibExecutor(flows, e, n_jobs=3)
c_jl.run()
###Output
_____no_output_____
###Markdown
Manual Creation of FlowsThe "hi"-flows defined above were nice because they illustrate in the simplest way possible what a flow is and how it can be used. In this section, we dive a bit deeper into how you can make a Flow yourself, from scratch. Your workflowTypically, you start from a certain workflow. As illustrated above, a _workflow_ is a piece of work you care about, and you want to be able to execute it in a controlled, experiment-like fashion. Here, we assume you are interested in the archetypal machine learning task of predicting the species of the Iris flower
###Code
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# Load data
X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=42
)
# Fit classifier
clf = DecisionTreeClassifier(max_depth=2)
clf.fit(X_train, y_train)
# Predict and Evaluate
y_pred = clf.predict(X_test)
score = accuracy_score(y_test, y_pred, normalize=True)
score
###Output
_____no_output_____
###Markdown
Make your _workflow_ into a _Flow_Now that you know what you want to do, you want to obtain a flow that implements this. The advantage is that annoying things like logging, timeouts, execution and scheduling are all taken care of, as soon as you succeed. This means removing boilerplate, and using battle-tested code instead. Basic Example (passing a function as argument)In its most basic form, this is a really simple thing, as we can just throw in a random python function _directly_. Consider this the _lazy_ way of doing things, which is supported.The only assumption is that your `flow` function has one input, typically named `config`. For the time being, this is a fairly constant assumption across `affe`.
###Code
def hello_world(config):
print("Hello World")
return
f = Flow(flow=hello_world)
###Output
_____no_output_____
###Markdown
So that's nice and all: this approach is quick and dirty, but it fails when you try to run it through a more advanced executor, such as one with logging.
###Code
f.run_with_log()
###Output
_____no_output_____
###Markdown
If you check the logfile, you can get some information as to why this is happening. Essentially, a common problem with abstracted execution is that you do need to have some kind of persistence of the code you wish to run. This is just to motivate that, at times, you would want to build your own `Flow` subclass, which will not be plagued by such limitations. Your Flow as a Flow-SubclassThis we could consider the right way to do things in `affe`: subclass the Flow class and add anything you like. Implementation in Notebook
###Code
from affe import Flow
from time import sleep
class IrisFlow(Flow):
def __init__(self, max_depth=None, sleep_seconds=0, **kwargs):
"""
All the information you want to pass inside the flow function,
you can embed in the config dictionary.
"""
self.config = dict(max_depth=max_depth, sleep_seconds=sleep_seconds)
super().__init__(config=self.config, **kwargs)
return
@staticmethod
def imports():
"""For remote executions, you better specify your imports explicitly.
Depending on the use-case, this is not necessary, but it will never hurt.
"""
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from time import sleep
return
def flow(self, config):
"""
This function is basically a verbatim copy of your workflow above.
Prerequisites:
- This function has to be called flow
- It expects one input: config
The only design pattern to take into account is that you can assume one
input only, which then by definition constitutes your "configuration" for your workflow.
Whatever parameters you need, you can extract from this. This pattern is somewhat restricitive,
but if you are implementing experiments, you probably should be this strict anyway; you're welcome.
The other thing is the name of this function: it has to be "flow", in order for some of the
executioners to properly find it. Obviously, if your only usecase is to run the flow function
yourself, this does not matter at all. But in most cases it does, and again: adhering to this pattern
will never hurt you, deviation could.
"""
# Obtain configuration
max_depth = config.get("max_depth", None)
sleep_seconds = config.get("sleep_seconds", 0)
print("I am about to execute the IRIS FLOW")
print("BUT FIRST: I shall sleep {} seconds".format(sleep_seconds))
sleep(sleep_seconds)
print("I WOKE UP, gonna do my stuff now.")
# Load data
X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=42
)
# Fit classifier
clf = DecisionTreeClassifier(max_depth=max_depth)
clf.fit(X_train, y_train)
# Predict and Evaluate
y_pred = clf.predict(X_test)
score = accuracy_score(y_test, y_pred, normalize=True)
msg = """
I am DONE executing the IRIS FLOW
"""
print(msg)
return score
###Output
_____no_output_____
###Markdown
TryoutNow, we can verify how this thing works.
###Code
iris_flow_02 = IrisFlow(max_depth=1)
iris_flow_02.run()
iris_flow_10 = IrisFlow(max_depth=10)
iris_flow_10.run()
###Output
I am about to execute the IRIS FLOW
BUT FIRST: I shall sleep 0 seconds
I WOKE UP, gonna do my stuff now.
I am DONE executing the IRIS FLOW
###Markdown
 Implementation in Codebase. Alright, that looked pretty nice already. Now the question is: _what is in it for me?_ Well, you get: logging, timeouts, boilerplate filesystem management, fancy executors, and so much more! So let's dive into that. However, the `IrisFlow` class does not exist outside of our Jupyter notebook, and that is unfortunately not OK for `affe` when running something in a subprocess/another shell, which is what you need to get these fancy functionalities. So, allow us to continue via a demonstration flow, which learns a decision tree on the iris dataset (yes, exactly what we were doing with our IrisFlow already). You can check the source code to verify that it does exactly the same thing as the IrisFlow above, with the added feature that `IrisDemo` actually exists in your Python path. Import: let us import the `IrisDemo` class and demonstrate that it behaves in exactly the same way.
###Code
from affe.demo import IrisDemo
demoflow = IrisDemo(max_depth=3, log_filepath="logs/irisdemo")
demoflow.run()
###Output
I am about to execute the IRIS FLOW
BUT FIRST: I shall sleep 0 seconds
I WOKE UP, gonna do my stuff now.
I am DONE executing the IRIS FLOW
###Markdown
 Logging. Depending on how you run the flow, a different executor is called in the backend, and some of those executors give you logging out of the box, if you do it right. In our case, we need `DTAIExperimenterProcessExecutor`, which is used in the `run_with_log_via_shell` function. Additionally, if we specify the `log_filepath` parameter, we can give the logfiles custom names, which allows us to demonstrate this.
###Code
demoflow = IrisDemo(max_depth=3, log_filepath="logs/irisdemo")
demoflow.run_with_log_via_shell()
###Output
_____no_output_____
###Markdown
 Timeouts. To see how the timeouts work, we can use the "sleep" functionality to force our iris flow to take a bit longer. If we force the workflow (due to sleeping) to take longer than the timeout, execution will abort.
###Code
# this will just work, because the run() method has no notion of timeout
iris_flow = IrisDemo(
max_depth=10, sleep_seconds=5, log_filepath="logs/via-subprocess", timeout_s=3
)
iris_flow.run()
###Output
I am about to execute the IRIS FLOW
BUT FIRST: I shall sleep 5 seconds
I WOKE UP, gonna do my stuff now.
I am DONE executing the IRIS FLOW
###Markdown
So in this case, nothing really happens. Things change, however, when executing through shell.
###Code
# timeout is higher than the actual execution time
iris_flow = IrisDemo(
max_depth=10, sleep_seconds=2, log_filepath="logs/timeout-sufficient", timeout_s=10
)
iris_flow.run_with_log_via_shell()
# timeout lower than execution time
iris_flow = IrisDemo(
max_depth=10,
sleep_seconds=15,
log_filepath="logs/timeout-insufficient",
timeout_s=10,
)
iris_flow.run_with_log_via_shell()
###Output
_____no_output_____
###Markdown
 You can check those logfiles yourself and see what happens. The second logfile will tell you that it aborted due to hitting its time limit, as it should. Filesystem Management. This is not in a Flow object _by default_, in order to keep things clean. However, there exists another class, called `FlowOne`. This is still very much a bare-bones object: it is a subclass of Flow, with some minimal bookkeeping for a common experimental filesystem configuration baked in. In that way, it becomes a very nice starting point for future extensions.
###Code
from affe.flow import FlowOne
def hello_world(config):
print("Hello World")
return
f = FlowOne(flow=hello_world, identifier="HelloWorld")
# The logfile will end up inside this out directory
f.out_dp
f.run_with_log()
###Output
_____no_output_____
###Markdown
 Quickstart: residual_anomaly_detector quickstart guide. Preliminaries: Imports
###Code
import residual_anomaly_detector
import numpy as np
import pandas as pd
import sklearn
from sklearn.datasets import load_iris
###Output
_____no_output_____
###Markdown
Setup
###Code
iris = load_iris()
X = iris.get('data')
y = iris.get('target')
matrix = np.c_[X, y]
###Output
_____no_output_____
###Markdown
 Quickstart: elki_interface quickstart guide. Preliminaries: Imports
###Code
import elki_interface
import numpy as np
import pandas as pd
import sklearn
from sklearn.datasets import load_iris
###Output
_____no_output_____
###Markdown
Setup
###Code
iris = load_iris()
X = iris.get('data')
y = iris.get('target')
matrix = np.c_[X, y]
###Output
_____no_output_____ |
0307 - RERWITE Rebalancing for Benchmark-Add unrealized GainLoss.ipynb | ###Markdown
 nominal_price_result[0-3]: The Transaction Records for evenly rebalancing the portfolio every [1 year, 6 months, 3 months, 1 month], REGARDLESS of Commission Cost and FX Change; actual_price_result[0-3]: The Transaction Records for evenly rebalancing the portfolio every [1 year, 6 months, 3 months, 1 month], CONSIDERING Commission Cost and FX Change.
###Code
price_df_list = pickle.load(open("0306-adjusted market prices.out", "rb"))
# nominal_price_result / actual_price_result are assumed to be defined earlier in the
# notebook (they are not produced by the pickle load above)
plt.plot(nominal_price_result)
###Output
_____no_output_____
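###Markdown
 Each element of these result lists is assumed to be a dict with a 'Date' entry and a 'Record' entry (one record per asset, holding 'Price', 'Number' and 'Net Value'), since that is how they are unpacked below. A quick way to eyeball that structure:
###Code
# Sanity check (assumes the result lists are already in memory, see the note above):
# the first rebalancing action of the 6-month schedule and its record for the first asset
first_action = nominal_price_result[1][0]
print(first_action['Date'])
print(first_action['Record'][0])
###Output
_____no_output_____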
###Markdown
 price_df_list[0-2]: The history data for the three indexes [^BVSP, ^TWII, ^IXIC] within a given range, with the nominal price ['Price'] and the actual price ['Actual Price'], where the cumulative FX change ['Cum FX Change'] is taken into account.
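 A quick peek at the first dataframe confirms this layout (assuming the pickle above was written with these columns):
###Code
# Inspect the ^BVSP history; 'Actual Price' and 'Cum FX Change' are the columns used below
price_df_list[0][['Date', 'Price', 'Actual Price', 'Cum FX Change']].head()
###Output
_____no_output_____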
###Code
nominal_plot_data_list = []
actual_plot_data_list = []
# for balance_freq in range(4):
for balance_freq in [1]: # We only draw the 6-month rebalancing
# 1. Get Nominal Price Transaction Records
nominal = nominal_price_result[balance_freq]
trans_date_df = pd.DataFrame([tmp['Date']for tmp in nominal], columns = ['Date'])
# ^ Action dates in the five years
nominal_trans_df_list = []
for i in range(3): # nominal_df_list[0-2] for three assets
asset_df = pd.DataFrame([tmp['Record'][i] for tmp in nominal])
# ^ Get the asset weightage at each time
result_df = pd.concat([trans_date_df, asset_df], axis=1)
result_df.rename(columns={'0':'Date'}, inplace=True)
nominal_trans_df_list.append(result_df)
# 2. Get Actual Price Transaction Records
actual = actual_price_result[balance_freq]
trans_date_df = pd.DataFrame([tmp['Date']for tmp in actual], columns = ['Date'])
actual_trans_df_list = []
for i in range(3):
asset_df = pd.DataFrame([tmp['Record'][i] for tmp in actual])
result_df = pd.concat([trans_date_df, asset_df], axis=1)
result_df.rename(columns={'0':'Date'}, inplace=True)
actual_trans_df_list.append(result_df)
for market_num in range(3):
tmp_trans_df = actual_trans_df_list[market_num]
trans_date = tmp_trans_df['Date']
start_date = list(trans_date)[0]
end_date = list(price_df_list[0]['Date'])[-1]
history_df = price_df_list[market_num]
all_price_date = history_df['Date'][(history_df['Date']>=start_date) & (history_df['Date']<= end_date)]
plot_data = []
number = 0
net_value = 0
price = 0
for date in all_price_date:
if (trans_date == date).any(): # If rebalanced at that day:
number = tmp_trans_df['Number'][tmp_trans_df['Date']==date].values[0]
net_value = tmp_trans_df['Net Value'][tmp_trans_df['Date']==date].values[0]
price = tmp_trans_df['Price'][tmp_trans_df['Date']==date].values[0]
else:
price = history_df['Actual Price'][history_df['Date']==date].values[0]
net_value = number*price
plot_data.append({
"Date": date,
"Number": number,
"Price": price,
"Net Value": net_value
})
actual_plot_data_list.append(plot_data)
for market_num in range(3):
tmp_trans_df = nominal_trans_df_list[market_num]
trans_date = tmp_trans_df['Date']
start_date = list(trans_date)[0]
end_date = list(price_df_list[0]['Date'])[-1]
history_df = price_df_list[market_num]
all_price_date = history_df['Date'][(history_df['Date']>=start_date) & (history_df['Date']<= end_date)]
plot_data = []
number = 0
net_value = 0
price = 0
for date in all_price_date:
if (trans_date == date).any(): # If rebalanced at that day:
number = tmp_trans_df['Number'][tmp_trans_df['Date']==date].values[0]
net_value = tmp_trans_df['Net Value'][tmp_trans_df['Date']==date].values[0]
price = tmp_trans_df['Price'][tmp_trans_df['Date']==date].values[0]
else:
price = history_df['Actual Price'][history_df['Date']==date].values[0]
net_value = number*price
plot_data.append({
"Date": date,
"Number": number,
"Price": price,
"Net Value": net_value
})
nominal_plot_data_list.append(plot_data)
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.gridspec import GridSpec
from CSVUtils import *
DIR = "./from github/Stock-Trading-Environment/data"
file_names = ["^BVSP", "^TWII", "^IXIC"]
source_list = ["yahoo", "yahoo", "yahoo"]
nominal_labels = ["high risk-^BVSP_nominal", "mid risk-^TWII_nominal", "low risk-^IXIC"]
actual_labels = ["high risk-^BVSP_actual", "mid risk-^TWII_actual", "low risk-^IXIC"]
plt.rcParams['figure.facecolor'] = 'white'
fig=plt.figure(figsize=(40,25))
axs = []
gs=GridSpec(5,1) # 5 rows, 1 columns
axs.append(fig.add_subplot(gs[0,0])) # First row, first column
axs.append(fig.add_subplot(gs[1,0])) # First row, second column
axs.append(fig.add_subplot(gs[2,0])) # First row, third column
axs.append(fig.add_subplot(gs[3:,:])) # Second row, span all columns
for i, plot_data in enumerate(nominal_plot_data_list):
plot_data = pd.DataFrame(plot_data)
axs[i].plot(plot_data['Date'], np.log(plot_data['Net Value']/plot_data['Net Value'][0]),
color="C0", label = nominal_labels[i]+"_Log Market Value")
axs[i].bar(nominal_trans_df_list[i]['Date'], np.log(nominal_trans_df_list[i]['Net Value']/nominal_trans_df_list[i]['Net Value'][0]),
width=2, color="C0")
axs[i].plot(nominal_trans_df_list[i]['Date'], np.log(nominal_trans_df_list[i]['Net Value']/nominal_trans_df_list[i]['Net Value'][0]),
linestyle='--', color="C0", label = nominal_labels[i]+"_Log Book Value")
axs[i].axhline(y=0, color = "grey", linestyle='--')
axs[i].legend()
axs[i].set_title('Portfolio Weights')
axs[i].set_xlabel('Date')
axs[i].set_ylabel('Market Value (US$)')
for i, plot_data in enumerate(actual_plot_data_list):
plot_data = pd.DataFrame(plot_data)
axs[i].plot(plot_data['Date'], np.log(plot_data['Net Value']/plot_data['Net Value'][0]),
color="orange", label = actual_labels[i]+"_Log Market Value")
axs[i].bar(actual_trans_df_list[i]['Date'], np.log(actual_trans_df_list[i]['Net Value']/actual_trans_df_list[i]['Net Value'][0]),
width=2, color="orange")
axs[i].plot(actual_trans_df_list[i]['Date'], np.log(actual_trans_df_list[i]['Net Value']/actual_trans_df_list[i]['Net Value'][0]),
linestyle='--', color="orange", label = actual_labels[i]+"_Log Book Value")
axs[i].plot(price_df_list[i]['Date'], np.log(price_df_list[i]['Cum FX Change']),
color="green", linestyle='--', label = nominal_labels[i]+"_Log FX Change")
axs[i].axhline(y=0, color = "grey", linestyle='--')
# axs[i].set_ylim((0, 300000))
axs[i].legend()
axs[i].set_title('Log Portfolio Value')
axs[i].set_xlabel('Date')
axs[i].set_ylabel('Log Value')
for i in range(2,-1,-1): # Inverse: Low-Mid-High
df = csv2df(DIR, file_names[i]+".csv",source = source_list[i])
df['Date'] = pd.to_datetime(df['Date'])
df = df[(df['Date']>=pd.to_datetime("2015-01-01"))&(df['Date']<=pd.to_datetime("2019-12-31"))].reset_index(drop=True)
j = 0
init_price = df['Price'][j]
while np.isnan(init_price):
j+=1
init_price = df['Price'][j]
y = np.log(df['Price'][j:] / init_price)
x = df['Date'][j:]
axs[3].plot(x,y,label = nominal_labels[i])
axs[3].axhline(y=0, color = "grey", linestyle='--')
# axs[3].set_ylim((-1,1))
axs[3].legend()
axs[3].set_title('Log Market Price')
axs[3].set_xlabel('Date')
axs[3].set_ylabel('log(Market Price)')
plt.show()
###Output
_____no_output_____ |
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/DIFFERENT/10_MINUTES_TO_PANDAS.ipynb | ###Markdown
Reduction in the dimensions of the returned object:
###Code
df.loc["20130102", ["A","B"]]
###Output
_____no_output_____
###Markdown
For getting a scalar value:
###Code
df.loc[dates[0], "A"]
###Output
_____no_output_____
###Markdown
For getting fast access to a scalar (equivalent to the prior method):
###Code
df.at[dates[0], "A"]
###Output
_____no_output_____
###Markdown
Selection by position Select via the position of the passed integers:
###Code
df.iloc[3]
###Output
_____no_output_____
###Markdown
By integer slices, acting similar to numpy/Python:
###Code
df.iloc[3:5 , 0:2]
###Output
_____no_output_____
###Markdown
By lists of integer position locations, similar to the NumPy/Python style:
###Code
df.iloc[[1, 2, 4], [0, 2]]
###Output
_____no_output_____
###Markdown
For slicing rows explicitly:
###Code
df.iloc[1:3, :]
###Output
_____no_output_____
###Markdown
For slicing columns explicitly:
###Code
df.iloc[ : , 1:3 ]
###Output
_____no_output_____
###Markdown
For getting a value explicitly:
###Code
df.iloc[1,1]
###Output
_____no_output_____
###Markdown
For getting fast access to a scalar (equivalent to the prior method):
###Code
df.iat[1,1]
###Output
_____no_output_____
###Markdown
Boolean indexing Using a single column’s values to select data.
###Code
df[df["A"] > 0]
###Output
_____no_output_____
###Markdown
Selecting values from a DataFrame where a boolean condition is met.
###Code
df[df > 0]
###Output
_____no_output_____
###Markdown
Using the isin() method for filtering:
###Code
df2 = df.copy()
df2["E"] = ["one", "one", "two", "three", "four", "three"]
df2
df2[df2["E"].isin(["two", "four"])]
###Output
_____no_output_____
###Markdown
Setting Setting a new column automatically aligns the data by the indexes.
###Code
s1 = pd.Series([1, 2, 3, 4, 5, 6], index=pd.date_range("20130102", periods=6))
s1
df["F"] = s1
###Output
_____no_output_____
###Markdown
Setting values by label:
###Code
df.at[dates[0], "A"] = 0
###Output
_____no_output_____
###Markdown
Setting values by position:
###Code
df.iat[0, 1] = 0
###Output
_____no_output_____
###Markdown
Setting by assigning with a NumPy array:
###Code
df.loc[:, "D"] = np.array([5] * len(df))
###Output
_____no_output_____
###Markdown
The result of the prior setting operations.
###Code
df
###Output
_____no_output_____
###Markdown
A where operation with setting.
###Code
df3 = df.copy()
df3[df3>0] = - df3
df3
###Output
_____no_output_____
###Markdown
Missing data pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See the Missing Data section. Reindexing allows you to change/add/delete the index on a specified axis. This returns a copy of the data.
###Code
df4 = df.reindex(index=dates[0:4], columns=list(df.columns) + ["E"])
df4.loc[dates[0] : dates[1], "E"] = 1
df4
###Output
_____no_output_____
###Markdown
To drop any rows that have missing data.
###Code
df4.dropna(how="any")
###Output
_____no_output_____
###Markdown
Filling missing data.
###Code
df4.fillna(value=5)
###Output
_____no_output_____
###Markdown
To get the boolean mask where values are nan.
###Code
pd.isnull(df4)
###Output
_____no_output_____
###Markdown
 OPERATIONS. Stats: operations in general exclude missing data. Performing a descriptive statistic:
###Code
df.mean()
###Output
_____no_output_____
###Markdown
Same operation on the other axis:
###Code
df.mean(1)
###Output
_____no_output_____
###Markdown
 Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension.
###Code
s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(2)
s
df.sub(s, axis='index')
###Output
_____no_output_____
###Markdown
Apply Applying functions to the data
###Code
df.apply(np.cumsum)
df.apply(lambda x: x.max() - x.min())
###Output
_____no_output_____
###Markdown
Histogramming
###Code
s = pd.Series(np.random.randint(0, 7, size=10))
s
s.value_counts()
###Output
_____no_output_____
###Markdown
 String Methods. Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern matching in str generally uses regular expressions by default (and in some cases always uses them).
###Code
s = pd.Series(['A', 'B', 'C', 'Aa145ba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
s.str.lower()
###Output
_____no_output_____
###Markdown
 Merge. Concat: pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations. Concatenating pandas objects together with **concat()**:
###Code
df = pd.DataFrame(np.random.randn(10, 4))
df
###Output
_____no_output_____
###Markdown
break it into pieces
###Code
pieces = [df[:3], df[3:7], df[7:]]
pieces
pd.concat(pieces)
###Output
_____no_output_____
###Markdown
Join SQL style merges:
###Code
left = pd.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})
right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})
left
right
pd.merge(left, right, on="key")
###Output
_____no_output_____
###Markdown
Append Append rows to a dataframe:
###Code
df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
df
s = df.iloc[3]
s
df.append(s, ignore_index=True)
df
###Output
_____no_output_____
###Markdown
 Grouping. By **“group by”** we are referring to a process involving one or more of the following steps: * **Splitting** the data into groups based on some criteria * **Applying** a function to each group independently * **Combining** the results into a data structure
###Code
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
'foo', 'bar', 'foo', 'foo'],
'B' : ['one', 'one', 'two', 'three',
'two', 'two', 'one', 'three'],
'C' : np.random.randn(8),
'D' : np.random.randn(8)})
df
###Output
_____no_output_____
###Markdown
Grouping and then applying a function **sum** to the resulting groups:
###Code
df.groupby('A').sum()
df.groupby('B').sum()
###Output
_____no_output_____
###Markdown
 Grouping by multiple columns forms a hierarchical index, to which we then apply the function:
###Code
df.groupby(["A", "B"]).sum()
df.groupby(["B", "A"]).sum()
###Output
_____no_output_____
###Markdown
Reshaping Stack
###Code
tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
....: 'foo', 'foo', 'qux', 'qux'],
....: ['one', 'two', 'one', 'two',
....: 'one', 'two', 'one', 'two']]))
tuples
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])
df
df2 = df[:4]
df2
###Output
_____no_output_____
###Markdown
The **stack()** method “compresses” a level in the DataFrame’s columns.
###Code
stacked = df2.stack()
stacked
pd.DataFrame(stacked)
###Output
_____no_output_____
###Markdown
With a “stacked” DataFrame or Series (having a **MultiIndex** as the index), the inverse operation of**stack()** is **unstack()**, which by default unstacks the last level:
###Code
stacked.unstack()
stacked.unstack(1)
stacked.unstack(0)
###Output
_____no_output_____
###Markdown
Pivot Tables
###Code
df = pd.DataFrame({'A' : ['one', 'one', 'two', 'three'] * 3,
'B' : ['A', 'B', 'C'] * 4,
'C' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
'D' : np.random.randn(12),
'E' : np.random.randn(12)})
df
###Output
_____no_output_____
###Markdown
We can produce **pivot tables** from this data very easily:
###Code
pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])
###Output
_____no_output_____
###Markdown
 Time Series. pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications.
###Code
rng = pd.date_range('1/1/2012', periods=100, freq='S')
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
ts
ts.resample('5Min').sum()  # aggregate the secondly data into 5-minute bins
###Output
_____no_output_____
###Markdown
Time zone representation
###Code
rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D')
ts = pd.Series(np.random.randn(len(rng)), rng)
ts
ts_utc = ts.tz_localize('UTC')
ts_utc
###Output
_____no_output_____
###Markdown
**Convert to another time zone**
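 For example, the UTC series defined above can be converted with ``tz_convert`` (``US/Eastern`` is used here purely as an illustration):
###Code
# Convert the timezone-aware series from UTC to another time zone
ts_utc.tz_convert('US/Eastern')
###Output
_____no_output_____
###Markdown
 Converting between time span representations (period and timestamp):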
###Code
rng = pd.date_range('1/1/2012', periods=5, freq='M')
ts = pd.Series(np.random.randn(len(rng)), index=rng)
ts
ps = ts.to_period()
ps
ps.to_timestamp()
###Output
_____no_output_____
###Markdown
 Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end:
###Code
prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')
ts = pd.Series(np.random.randn(len(prng)), prng)
ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9
ts.head()
###Output
_____no_output_____
###Markdown
Categoricals Since version 0.15, pandas can include categorical data in a **DataFrame**.
###Code
df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a','a','e']})
###Output
_____no_output_____
###Markdown
Convert the raw grades to a categorical data type.
###Code
df["grade"] = df["raw_grade"].astype("category")
df["grade"]
###Output
_____no_output_____
###Markdown
 Rename the categories to more meaningful names (assigning to **Series.cat.categories** is inplace!)
###Code
df["grade"].cat.categories = ["very good", "good", "very bad"]
###Output
_____no_output_____
###Markdown
Reorder the categories and simultaneously add the missing categories (methods under **Series.cat()** return a new **Series** per default).
###Code
df["grade"] = df["grade"].cat.set_categories(
["very bad", "bad", "medium", "good", "very good"]
)
df["grade"]
###Output
_____no_output_____
###Markdown
Sorting is per order in the categories, not lexical order:
###Code
df.sort_values(by="grade")
###Output
_____no_output_____
###Markdown
Grouping by a categorical column also shows empty categories:
###Code
df.groupby("grade").size()
###Output
_____no_output_____
###Markdown
Plotting
###Code
import matplotlib.pyplot as plt
plt.close("all")
ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
ts = ts.cumsum()
ts.plot()
###Output
_____no_output_____
###Markdown
On a DataFrame, the **plot()** method is a convenience to plot all of the columns with labels:
###Code
df = pd.DataFrame(
np.random.randn(1000, 4), index=ts.index, columns=["A", "B", "C", "D"]
)
df = df.cumsum()
plt.figure()
df.plot()
plt.legend(loc='best')
###Output
_____no_output_____
###Markdown
Getting data in/out CSV Writing to a csv file:
###Code
df.to_csv("10mpandas.csv")
###Output
_____no_output_____
###Markdown
Reading from a csv file:
###Code
pd.read_csv("10mpandas.csv")
###Output
_____no_output_____
###Markdown
10 minutes to pandas https://pandas.pydata.org/pandas-docs/stable/user_guide/10min.html This is a short introduction to pandas, geared mainly for new users. Customarily, we import as follows:
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Object creation Creating a Series by passing a list of values, letting pandas create a default integer index:
###Code
s = pd.Series([1, 3, 5, np.nan, 6, 8])
s
###Output
_____no_output_____
###Markdown
Creating a DataFrame by passing a NumPy array, with a datetime index and labeled columns:
###Code
dates = pd.date_range("20130101", periods=6)
dates
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list("ABCD"))
df
###Output
_____no_output_____
###Markdown
Creating a DataFrame by passing a dict of objects that can be converted to series-like.
###Code
df2 = pd.DataFrame(
{
"A": 1.0,
"B": pd.Timestamp("20130102"),
"C": pd.Series(1, index=list(range(4)), dtype="float32"),
"D": np.array([3] * 4, dtype="int32"),
"E": pd.Categorical(["test", "train", "test", "train"]),
"F": "foo",
}
)
df2
###Output
_____no_output_____
###Markdown
The columns of the resulting DataFrame have different dtypes.
###Code
df2.dtypes
###Output
_____no_output_____
###Markdown
Viewing data Here is how to view the top and bottom rows of the frame:
###Code
df.head()
df.tail(4)
###Output
_____no_output_____
###Markdown
Display the index, columns:
###Code
df.index
df.columns
###Output
_____no_output_____
###Markdown
DataFrame.to_numpy() gives a NumPy representation of the underlying data. Note that this can be an expensive operation when your DataFrame has columns with different data types, which comes down to a fundamental difference between pandas and NumPy: NumPy arrays have one dtype for the entire array, while pandas DataFrames have one dtype per column. When you call DataFrame.to_numpy(), pandas will find the NumPy dtype that can hold all of the dtypes in the DataFrame. This may end up being object, which requires casting every value to a Python object. For df, our DataFrame of all floating-point values, DataFrame.to_numpy() is fast and doesn’t require copying data.
###Code
df.to_numpy()
###Output
_____no_output_____
###Markdown
For df2, the DataFrame with multiple dtypes, DataFrame.to_numpy() is relatively expensive.
###Code
df2.to_numpy()
###Output
_____no_output_____
###Markdown
 Note: **DataFrame.to_numpy()** does not include the index or column labels in the output. **describe()** shows a quick statistic summary of your data:
###Code
df.describe()
###Output
_____no_output_____
###Markdown
Transposing your data:
###Code
df.T
df.describe().T
###Output
_____no_output_____
###Markdown
Sorting by an axis:
###Code
df.sort_index(axis=1, ascending=False)
###Output
_____no_output_____
###Markdown
Sorting by values:
###Code
df.sort_values(by="B")
###Output
_____no_output_____
###Markdown
 Selection: Getting. Selecting a single column, which yields a Series, equivalent to df.A:
###Code
df["A"]
###Output
_____no_output_____
###Markdown
Selecting via [], which slices the rows.
###Code
df[0:3]
df["20130102":"20130104"]
###Output
_____no_output_____
###Markdown
Selection by label For getting a cross section using a label:
###Code
df.loc[dates[0]]
###Output
_____no_output_____
###Markdown
Selecting on a multi-axis by label:
###Code
df.loc[:,["A","B"]]
###Output
_____no_output_____
###Markdown
Showing label slicing, both endpoints are included:
###Code
df.loc["20130102":"20130104", ["A","B"]]
###Output
_____no_output_____ |
notebooks/Python-in-2-days/D1_L6_MatPlotLib_and_Seaborn/14-Visualization-With-Seaborn.ipynb | ###Markdown
Visualization with Seaborn Matplotlib has proven to be an incredibly useful and popular visualization tool, but even avid users will admit it often leaves much to be desired.There are several valid complaints about Matplotlib that often come up:- Prior to version 2.0, Matplotlib's defaults are not exactly the best choices. It was based off of MATLAB circa 1999, and this often shows.- Matplotlib's API is relatively low level. Doing sophisticated statistical visualization is possible, but often requires a *lot* of boilerplate code.- Matplotlib predated Pandas by more than a decade, and thus is not designed for use with Pandas ``DataFrame``s. In order to visualize data from a Pandas ``DataFrame``, you must extract each ``Series`` and often concatenate them together into the right format. It would be nicer to have a plotting library that can intelligently use the ``DataFrame`` labels in a plot.An answer to these problems is [Seaborn](http://seaborn.pydata.org/). Seaborn provides an API on top of Matplotlib that offers sane choices for plot style and color defaults, defines simple high-level functions for common statistical plot types, and integrates with the functionality provided by Pandas ``DataFrame``s.To be fair, the Matplotlib team is addressing this: it has recently added the ``plt.style`` tools discussed in *Customizing Matplotlib: Configurations and Style Sheets*, and is starting to handle Pandas data more seamlessly.The 2.0 release of the library will include a new default stylesheet that will improve on the current status quo.But for all the reasons just discussed, Seaborn remains an extremely useful addon. Seaborn Versus MatplotlibHere is an example of a simple random-walk plot in Matplotlib, using its classic plot formatting and colors.We start with the typical imports:
###Code
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Now we create some random walk data:
###Code
# Create some data
rng = np.random.RandomState(0)
x = np.linspace(0, 10, 500)
y = np.cumsum(rng.randn(500, 6), 0)
###Output
_____no_output_____
###Markdown
And do a simple plot:
###Code
# Plot the data with Matplotlib defaults
plt.plot(x, y)
plt.legend('ABCDEF', ncol=2, loc='upper left');
###Output
_____no_output_____
###Markdown
Although the result contains all the information we'd like it to convey, it does so in a way that is not all that aesthetically pleasing, and even looks a bit old-fashioned in the context of 21st-century data visualization.Now let's take a look at how it works with Seaborn.As we will see, Seaborn has many of its own high-level plotting routines, but it can also overwrite Matplotlib's default parameters and in turn get even simple Matplotlib scripts to produce vastly superior output.We can set the style by calling Seaborn's ``set()`` method.By convention, Seaborn is imported as ``sns``:
###Code
import seaborn as sns
sns.set()
###Output
_____no_output_____
###Markdown
Now let's rerun the same two lines as before:
###Code
# same plotting code as above!
plt.plot(x, y)
plt.legend('ABCDEF', ncol=2, loc='upper left');
###Output
_____no_output_____
###Markdown
 Ah, much better! Exploring Seaborn Plots. The main idea of Seaborn is that it provides high-level commands to create a variety of plot types useful for statistical data exploration, and even some statistical model fitting. Let's take a look at a few of the datasets and plot types available in Seaborn. Note that all of the following *could* be done using raw Matplotlib commands (this is, in fact, what Seaborn does under the hood) but the Seaborn API is much more convenient. Histograms, KDE, and densities. Often in statistical data visualization, all you want is to plot histograms and joint distributions of variables. We have seen that this is relatively straightforward in Matplotlib:
###Code
data = np.random.multivariate_normal([0, 0], [[5, 2], [2, 2]], size=2000)
data = pd.DataFrame(data, columns=['x', 'y'])
for col in 'xy':
plt.hist(data[col], normed=True, alpha=0.5)
###Output
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:5: MatplotlibDeprecationWarning:
The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.
"""
###Markdown
Rather than a histogram, we can get a smooth estimate of the distribution using a kernel density estimation, which Seaborn does with ``sns.kdeplot``:
###Code
for col in 'xy':
sns.kdeplot(data[col], shade=True)
###Output
_____no_output_____
###Markdown
Histograms and KDE can be combined using ``distplot``:
###Code
sns.distplot(data['x'])
sns.distplot(data['y']);
###Output
_____no_output_____
###Markdown
If we pass the full two-dimensional dataset to ``kdeplot``, we will get a two-dimensional visualization of the data:
###Code
sns.kdeplot(data);
###Output
C:\ProgramData\Anaconda3\lib\site-packages\seaborn\distributions.py:679: UserWarning: Passing a 2D dataset for a bivariate plot is deprecated in favor of kdeplot(x, y), and it will cause an error in future versions. Please update your code.
warnings.warn(warn_msg, UserWarning)
###Markdown
We can see the joint distribution and the marginal distributions together using ``sns.jointplot``.For this plot, we'll set the style to a white background:
###Code
with sns.axes_style('white'):
sns.jointplot("x", "y", data, kind='kde');
###Output
_____no_output_____
###Markdown
There are other parameters that can be passed to ``jointplot``—for example, we can use a hexagonally based histogram instead:
###Code
with sns.axes_style('white'):
sns.jointplot("x", "y", data, kind='hex')
###Output
_____no_output_____
###Markdown
 Pair plots. When you generalize joint plots to datasets of larger dimensions, you end up with *pair plots*. This is very useful for exploring correlations between multidimensional data, when you'd like to plot all pairs of values against each other. We'll demo this with the well-known Iris dataset, which lists measurements of petals and sepals of three iris species:
###Code
iris = sns.load_dataset("iris")
iris.head()
###Output
_____no_output_____
###Markdown
Visualizing the multidimensional relationships among the samples is as easy as calling ``sns.pairplot``:
###Code
sns.pairplot(iris, hue='species', size=2.5);
###Output
C:\ProgramData\Anaconda3\lib\site-packages\seaborn\axisgrid.py:2065: UserWarning: The `size` parameter has been renamed to `height`; pleaes update your code.
warnings.warn(msg, UserWarning)
###Markdown
 Faceted histograms. Sometimes the best way to view data is via histograms of subsets. Seaborn's ``FacetGrid`` makes this extremely simple. We'll take a look at some data that shows the amount that restaurant staff receive in tips based on various indicator data:
###Code
tips = sns.load_dataset('tips')
tips.head()
tips['tip_pct'] = 100 * tips['tip'] / tips['total_bill']
grid = sns.FacetGrid(tips, row="sex", col="time", margin_titles=True)
grid.map(plt.hist, "tip_pct", bins=np.linspace(0, 40, 15));
###Output
_____no_output_____
###Markdown
 Factor plots. Factor plots can be useful for this kind of visualization as well. This allows you to view the distribution of a parameter within bins defined by any other parameter:
###Code
with sns.axes_style(style='ticks'):
g = sns.factorplot("day", "total_bill", "sex", data=tips, kind="box")
g.set_axis_labels("Day", "Total Bill");
###Output
C:\ProgramData\Anaconda3\lib\site-packages\seaborn\categorical.py:3666: UserWarning: The `factorplot` function has been renamed to `catplot`. The original name will be removed in a future release. Please update your code. Note that the default `kind` in `factorplot` (`'point'`) has changed `'strip'` in `catplot`.
warnings.warn(msg)
###Markdown
 Joint distributions. Similar to the pairplot we saw earlier, we can use ``sns.jointplot`` to show the joint distribution between different datasets, along with the associated marginal distributions:
###Code
with sns.axes_style('white'):
sns.jointplot("total_bill", "tip", data=tips, kind='hex')
###Output
_____no_output_____
###Markdown
The joint plot can even do some automatic kernel density estimation and regression:
###Code
sns.jointplot("total_bill", "tip", data=tips, kind='reg');
###Output
_____no_output_____
###Markdown
 Bar plots. Time series can be plotted using ``sns.factorplot``. In the following example, we'll use the Planets data that we first saw in *Aggregation and Grouping*:
###Code
planets = sns.load_dataset('planets')
planets.head()
with sns.axes_style('white'):
g = sns.factorplot("year", data=planets, aspect=2,
kind="count", color='steelblue')
g.set_xticklabels(step=5)
###Output
C:\ProgramData\Anaconda3\lib\site-packages\seaborn\categorical.py:3666: UserWarning: The `factorplot` function has been renamed to `catplot`. The original name will be removed in a future release. Please update your code. Note that the default `kind` in `factorplot` (`'point'`) has changed `'strip'` in `catplot`.
warnings.warn(msg)
###Markdown
We can learn more by looking at the *method* of discovery of each of these planets:
###Code
with sns.axes_style('white'):
g = sns.factorplot("year", data=planets, aspect=4.0, kind='count',
hue='method', order=range(2001, 2015))
g.set_ylabels('Number of Planets Discovered')
###Output
_____no_output_____
###Markdown
 For more information on plotting with Seaborn, see the [Seaborn documentation](http://seaborn.pydata.org/), a [tutorial](http://seaborn.pydata.org/tutorial.htm), and the [Seaborn gallery](http://seaborn.pydata.org/examples/index.html). Example: Exploring Marathon Finishing Times. Here we'll look at using Seaborn to help visualize and understand finishing results from a marathon. I've scraped the data from sources on the Web, aggregated it and removed any identifying information, and put it on GitHub where it can be downloaded (if you are interested in using Python for web scraping, I would recommend [*Web Scraping with Python*](http://shop.oreilly.com/product/0636920034391.do) by Ryan Mitchell). We will start by downloading the data from the Web, and loading it into Pandas:
###Code
data = pd.read_csv('data/marathon-data.csv')
data.head()
###Output
_____no_output_____
###Markdown
By default, Pandas loaded the time columns as Python strings (type ``object``); we can see this by looking at the ``dtypes`` attribute of the DataFrame:
###Code
data.dtypes
###Output
_____no_output_____
###Markdown
Let's fix this by providing a converter for the times:
###Code
import datetime
def convert_time(s):
h, m, s = map(int, s.split(':'))
return datetime.timedelta(hours=h, minutes=m, seconds=s)
data = pd.read_csv('data/marathon-data.csv',
converters={'split':convert_time, 'final':convert_time})
data.head()
data.dtypes
###Output
_____no_output_____
###Markdown
That looks much better. For the purpose of our Seaborn plotting utilities, let's next add columns that give the times in seconds:
###Code
data['split_sec'] = data['split'].astype(int) / 1E9
data['final_sec'] = data['final'].astype(int) / 1E9
data.head()
###Output
_____no_output_____
###Markdown
To get an idea of what the data looks like, we can plot a ``jointplot`` over the data:
###Code
with sns.axes_style('white'):
g = sns.jointplot("split_sec", "final_sec", data, kind='hex')
g.ax_joint.plot(np.linspace(4000, 16000),
np.linspace(8000, 32000), ':k')
###Output
_____no_output_____
###Markdown
The dotted line shows where someone's time would lie if they ran the marathon at a perfectly steady pace. The fact that the distribution lies above this indicates (as you might expect) that most people slow down over the course of the marathon.If you have run competitively, you'll know that those who do the opposite—run faster during the second half of the race—are said to have "negative-split" the race.Let's create another column in the data, the split fraction, which measures the degree to which each runner negative-splits or positive-splits the race:
###Code
data['split_frac'] = 1 - 2 * data['split_sec'] / data['final_sec']
data.head()
###Output
_____no_output_____
###Markdown
Where this split difference is less than zero, the person negative-split the race by that fraction.Let's do a distribution plot of this split fraction:
###Code
sns.distplot(data['split_frac'], kde=False);
plt.axvline(0, color="k", linestyle="--");
sum(data.split_frac < 0)
###Output
_____no_output_____
###Markdown
Out of nearly 40,000 participants, there were only 250 people who negative-split their marathon.Let's see whether there is any correlation between this split fraction and other variables. We'll do this using a ``pairgrid``, which draws plots of all these correlations:
###Code
g = sns.PairGrid(data, vars=['age', 'split_sec', 'final_sec', 'split_frac'],
hue='gender', palette='RdBu_r')
g.map(plt.scatter, alpha=0.8)
g.add_legend();
###Output
_____no_output_____
###Markdown
It looks like the split fraction does not correlate particularly with age, but does correlate with the final time: faster runners tend to have closer to even splits on their marathon time.(We see here that Seaborn is no panacea for Matplotlib's ills when it comes to plot styles: in particular, the x-axis labels overlap. Because the output is a simple Matplotlib plot, however, the methods in *Customizing Ticks* can be used to adjust such things if desired.)The difference between men and women here is interesting. Let's look at the histogram of split fractions for these two groups:
###Code
sns.kdeplot(data.split_frac[data.gender=='M'], label='men', shade=True)
sns.kdeplot(data.split_frac[data.gender=='W'], label='women', shade=True)
plt.xlabel('split_frac');
###Output
_____no_output_____
###Markdown
The interesting thing here is that there are many more men than women who are running close to an even split!This almost looks like some kind of bimodal distribution among the men and women. Let's see if we can suss-out what's going on by looking at the distributions as a function of age.A nice way to compare distributions is to use a *violin plot*
###Code
sns.violinplot("gender", "split_frac", data=data,
palette=["lightblue", "lightpink"]);
###Output
_____no_output_____
###Markdown
This is yet another way to compare the distributions between men and women.Let's look a little deeper, and compare these violin plots as a function of age. We'll start by creating a new column in the array that specifies the decade of age that each person is in:
###Code
data['age_dec'] = data.age.map(lambda age: 10 * (age // 10))
data.head()
men = (data.gender == 'M')
women = (data.gender == 'W')
with sns.axes_style(style=None):
sns.violinplot("age_dec", "split_frac", hue="gender", data=data,
split=True, inner="quartile",
palette=["lightblue", "lightpink"]);
###Output
_____no_output_____
###Markdown
Looking at this, we can see where the distributions of men and women differ: the split distributions of men in their 20s to 50s show a pronounced over-density toward lower splits when compared to women of the same age (or of any age, for that matter).Also surprisingly, the 80-year-old women seem to outperform *everyone* in terms of their split time. This is probably due to the fact that we're estimating the distribution from small numbers, as there are only a handful of runners in that range:
###Code
(data.age > 80).sum()
###Output
_____no_output_____
###Markdown
Back to the men with negative splits: who are these runners? Does this split fraction correlate with finishing quickly? We can plot this very easily. We'll use ``regplot``, which will automatically fit a linear regression to the data:
###Code
g = sns.lmplot('final_sec', 'split_frac', col='gender', data=data,
markers=".", scatter_kws=dict(color='c'))
g.map(plt.axhline, y=0.1, color="k", ls=":");
###Output
_____no_output_____ |
notebooks/feature-engineering/Section-09-Outlier-Engineering/09.02-Capping-IQR-proximity-rule.ipynb | ###Markdown
 Outlier Engineering. An outlier is a data point which is significantly different from the remaining data. “An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism.” [D. Hawkins. Identification of Outliers, Chapman and Hall, 1980]. Statistics such as the mean and variance are very susceptible to outliers. In addition, **some Machine Learning models are sensitive to outliers**, which may decrease their performance. Thus, depending on which algorithm we wish to train, we often remove outliers from our variables. We discussed in section 3 of this course how to identify outliers. In this section, we discuss how we can process them to train our machine learning models. How can we pre-process outliers? - Trimming: remove the outliers from our dataset - Treat outliers as missing data, and proceed with any missing data imputation technique - Discretisation: outliers are placed in border bins together with higher or lower values of the distribution - Censoring: capping the variable distribution at a maximum and/or minimum value. **Censoring** is also known as: - top and bottom coding - winsorization - capping. Censoring or Capping. **Censoring**, or **capping**, means capping the maximum and/or minimum of a distribution at an arbitrary value. In other words, values bigger or smaller than the arbitrarily determined ones are **censored**. Capping can be done at both tails, or just one of the tails, depending on the variable and the user. Check my talk at [pydata](https://www.youtube.com/watch?v=KHGGlozsRtA) for an example of capping used in a finance company. The numbers at which to cap the distribution can be determined: - arbitrarily - using the inter-quantile range proximity rule - using the gaussian approximation - using quantiles. Advantages: - does not remove data. Limitations: - distorts the distributions of the variables - distorts the relationships among variables. In this Demo: We will see how to perform capping with the inter-quantile range proximity rule using the Boston House Dataset. Important: When doing capping, we tend to cap values both in the train and test set. It is important to remember that the capping values MUST be derived from the train set, and then used to cap the variables in the test set. I will not do that in this demo, but please keep that in mind when setting up your pipelines.
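 As a reference for what follows: under the inter-quantile range proximity rule, the capping values are computed as $$upper = Q_{75} + fold \times IQR, \qquad lower = Q_{25} - fold \times IQR$$ where $IQR = Q_{75} - Q_{25}$ and $fold$ is typically 1.5 (or 3 if we only want to flag extreme values); this is exactly what the `find_skewed_boundaries` function below implements.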
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# for Q-Q plots
import scipy.stats as stats
# boston house dataset for the demo
from sklearn.datasets import load_boston
from feature_engine.outlier_removers import Winsorizer
# load the the Boston House price data
# load the boston dataset from sklearn
boston_dataset = load_boston()
# create a dataframe with the independent variables
# I will use only 3 of the total variables for this demo
boston = pd.DataFrame(boston_dataset.data,
columns=boston_dataset.feature_names)[[
'RM', 'LSTAT', 'CRIM'
]]
# add the target
boston['MEDV'] = boston_dataset.target
boston.head()
# function to create histogram, Q-Q plot and
# boxplot. We learned this in section 3 of the course
def diagnostic_plots(df, variable):
# function takes a dataframe (df) and
# the variable of interest as arguments
# define figure size
plt.figure(figsize=(16, 4))
# histogram
plt.subplot(1, 3, 1)
sns.distplot(df[variable], bins=30)
plt.title('Histogram')
# Q-Q plot
plt.subplot(1, 3, 2)
stats.probplot(df[variable], dist="norm", plot=plt)
plt.ylabel('Variable quantiles')
# boxplot
plt.subplot(1, 3, 3)
sns.boxplot(y=df[variable])
plt.title('Boxplot')
plt.show()
# let's find outliers in RM
diagnostic_plots(boston, 'RM')
# visualise outliers in LSTAT
diagnostic_plots(boston, 'LSTAT')
# outliers in CRIM
diagnostic_plots(boston, 'CRIM')
###Output
_____no_output_____
###Markdown
There are outliers in all of the above variables. RM shows outliers in both tails, whereas LSTAT and CRIM only on the right tail.To find the outliers, let's re-utilise the function we learned in section 3:
###Code
def find_skewed_boundaries(df, variable, distance):
# Let's calculate the boundaries outside which sit the outliers
# for skewed distributions
# distance passed as an argument, gives us the option to
# estimate 1.5 times or 3 times the IQR to calculate
# the boundaries.
IQR = df[variable].quantile(0.75) - df[variable].quantile(0.25)
lower_boundary = df[variable].quantile(0.25) - (IQR * distance)
upper_boundary = df[variable].quantile(0.75) + (IQR * distance)
return upper_boundary, lower_boundary
# find limits for RM
RM_upper_limit, RM_lower_limit = find_skewed_boundaries(boston, 'RM', 1.5)
RM_upper_limit, RM_lower_limit
# limits for LSTAT
LSTAT_upper_limit, LSTAT_lower_limit = find_skewed_boundaries(boston, 'LSTAT', 1.5)
LSTAT_upper_limit, LSTAT_lower_limit
# limits for CRIM
CRIM_upper_limit, CRIM_lower_limit = find_skewed_boundaries(boston, 'CRIM', 1.5)
CRIM_upper_limit, CRIM_lower_limit
# Now let's replace the outliers by the maximum and minimum limit
boston['RM']= np.where(boston['RM'] > RM_upper_limit, RM_upper_limit,
np.where(boston['RM'] < RM_lower_limit, RM_lower_limit, boston['RM']))
# Now let's replace the outliers by the maximum and minimum limit
boston['LSTAT']= np.where(boston['LSTAT'] > LSTAT_upper_limit, LSTAT_upper_limit,
np.where(boston['LSTAT'] < LSTAT_lower_limit, LSTAT_lower_limit, boston['LSTAT']))
# Now let's replace the outliers by the maximum and minimum limit
boston['CRIM']= np.where(boston['CRIM'] > CRIM_upper_limit, CRIM_upper_limit,
np.where(boston['CRIM'] < CRIM_lower_limit, CRIM_lower_limit, boston['CRIM']))
# let's explore outliers in the trimmed dataset
# for RM we see much less outliers as in the original dataset
diagnostic_plots(boston, 'RM')
diagnostic_plots(boston, 'LSTAT')
diagnostic_plots(boston, 'CRIM')
###Output
_____no_output_____
###Markdown
We can see that the outliers are gone, but the variable distribution was distorted quite a bit. Censoring with feature-engine
###Code
# load the the Boston House price data
# load the boston dataset from sklearn
boston_dataset = load_boston()
# create a dataframe with the independent variables
# I will use only 3 of the total variables for this demo
boston = pd.DataFrame(boston_dataset.data,
columns=boston_dataset.feature_names)[[
'RM', 'LSTAT', 'CRIM'
]]
# add the target
boston['MEDV'] = boston_dataset.target
boston.head()
# create the capper
windsoriser = Winsorizer(distribution='skewed', # choose skewed for IQR rule boundaries or gaussian for mean and std
tail='both', # cap left, right or both tails
fold=1.5,
variables=['RM', 'LSTAT', 'CRIM'])
windsoriser.fit(boston)
boston_t = windsoriser.transform(boston)
diagnostic_plots(boston, 'RM')
diagnostic_plots(boston_t, 'RM')
# we can inspect the minimum caps for each variable
windsoriser.left_tail_caps_
# we can inspect the maximum caps for each variable
windsoriser.right_tail_caps_
###Output
_____no_output_____ |
notebooks/19_intro_to_machine_leaning.ipynb | ###Markdown
 Machine Learning. Up till now, we've seen the tools necessary to solve a variety of problems. These problems, however, need to have a **finite** number of steps and states, so that we can account for each one. We need to define a set of rules; rules we can code, so that in every possible scenario our algorithm comes up with an answer. With our current knowledge we can't solve a problem with an **indefinite** number of states (e.g. a chess match). The algorithms we can write are called **deterministic**. In contrast, there is another category of algorithms called **non-deterministic**, whose response is not hard-coded and can differ from run to run, even on the same input. Machine Learning (ML) will help us with the latter. > Machine Learning explores the study and construction of algorithms that can learn from and make predictions on data. So how does ML attempt to solve complex problems? Much like humans do, through trial and error! It learns to make associations from the data itself, without having any expert define or dictate a set of rules. These are formed on their own, through a procedure we call **training**. Let's not get too far ahead of ourselves. A more formal definition of Machine Learning is the following: > A computer program is said to learn from experience (*E*) with respect to some class of tasks (*T*) and performance measure (*P*) if its performance at tasks in *T*, as measured by *P*, improves with experience *E*. Let's try to break this down a bit: - The class of tasks (*T*) refers to the type of the problem (classification, clustering, etc.). - The performance measure (*P*) is a function that indicates how **well** the algorithm is doing in its task. - Experience (*E*), in the context of **training**, refers to the algorithm improving its performance on the task. Machine learning tasks fall into 3 broad categories: - **Supervised Learning**. Here the algorithm is presented with **labeled** data. It is the algorithm's job to associate the inputs with their labels. Classification and regression problems fall into this category. - **Unsupervised Learning**. The data in these types of problems has **no** labels. The algorithm's job is to find patterns or clusters in the data. Clustering, density estimation and dimensionality reduction problems fall into this category. - **Reinforcement Learning**. The algorithm interacts with a dynamic environment in which it must achieve a certain goal. The most popular category of Machine Learning is supervised learning. Supervised Learning. In this category we have a set of examples (or samples) $X$ and their labels (or targets) $Y$. The goal of the algorithm is to learn from $X$ and $Y$ in order to be able to predict the labels of future unseen examples. - If $Y$ is discrete, the problem we are trying to solve is called **classification** - If $Y$ is continuous, we are trying to solve a **regression** problem. Regression: Linear Regression. The simplest problem we can solve is a linear regression problem. > In statistics, linear regression is an approach for modeling the relationship between a scalar dependent variable $y$ and one or more explanatory variables (or independent variables) denoted $x$. In the context of ML we usually refer to $x$ as a **training example** and $y$ as its **label**. Basically, we have $(x,y)$ data and try to find the line that fits this data best. Let's define our problem: we'll take $100$ samples evenly distributed in $[0,100)$. These samples follow an underlying linear distribution but are infused with noise.
The goal is to find a line that best *fits* the data.
###Code
# CODE:
# --------------------------------------------
from __future__ import print_function, division
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Ensure reproducability
seed = 13
np.random.seed(seed)
# Construct data
x = np.linspace(0, 100, 100) # training examples
y = 2 * x + 10 * np.random.normal(size=100) # labels
# PLOTTING:
# --------------------------------------------
# Create figure
fig = plt.figure(figsize=(7, 5))
ax = plt.subplot(111)
# Scatter data points
ax.scatter(x, y, c='#1f77b4')
# Aesthetic parameters
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_title('Training examples and their labels')
###Output
_____no_output_____
###Markdown
 As we previously said, we are essentially looking for a line that best *fits* the data. A line is defined as $y = w \cdot x + b$, so we need to figure out the values of $w$ and $b$. We'll draw a few lines to see the differences:
###Code
# CODE:
# --------------------------------------------
# Line 1
w1 = 1
b1 = 20
y1 = w1 * x + b1
# Line 2
w2 = 4
b2 = -20
y2 = w2 * x + b2
# Line 3
w3 = -0.5
b3 = 150
y3 = w3 * x + b3
# PLOTTING:
# --------------------------------------------
# Create figure
fig = plt.figure(figsize=(7, 5))
ax = plt.subplot(111)
# Scatter data points
ax.scatter(x, y, c='#1f77b4', label='data points')
# Draw the three lines
ax.plot(x, y1, c='#ff7f0e', label='line 1')
ax.plot(x, y2, c='#e377c2', label='line 2')
ax.plot(x, y3, c='#2ca02c', label='line 3')
# Aesthetic parameters
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_ylim([-20, 220])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.legend(loc='lower right')
ax.set_title('Data points and random lines')
###Output
_____no_output_____
###Markdown
Now, to our question at hand. Which of these three lines best *fits* the data? Well, first it would help if we specify what we mean by the word *fits* more clearly. Or even better, if we can somehow **quantify** it. What we essentially need is a measure of how *close* the line is to the data. In the context of Machine Learning, we refer to this *measure* as a **performance metric**. This is one of the most important parts of machine learning, as it gives us a way of telling how *well* our algorithm is doing, or how *close* it is to reaching its goal; but most importantly it gives us a way to tell if our algorithm is *improving* or not!In this case we will select the [Mean Squared Error](https://en.wikipedia.org/wiki/Mean_squared_error) (MSE) as our performance metric:$$MSE = \frac{1}{N} \cdot \sum_{i=1}^N{\left( y_i - \hat y_i \right) ^2}$$where $N$ is the number of samples, $y_i$ is the label for data point $x_i$ and $\hat y_i$ is the *prediction* for the same data point.The smaller the MSE, the closer the line is to our data.
###Code
# CODE:
# --------------------------------------------
def mse(y, y_hat):
"""
Calculates the Mean Squared Error between the labels (y) and the predictions (y_hat)
"""
return ((y - y_hat)**2).sum() / len(y)
print('line1 MSE:', mse(y, y1))
print('line2 MSE:', mse(y, y2))
print('line3 MSE:', mse(y, y3))
###Output
line1 MSE: 1802.4096380013298
line2 MSE: 9934.38656402079
line3 MSE: 5821.561278104836
###Markdown
Judging by this, `line1` is the best of the three.Now that we've clearly defined our goal (i.e. to achieve the lowest possible MSE), we can move on to creating a Linear Regression model that will do exactly that: find the line that minimizes the MSE.The first step in most Machine Learning algorithms is to **initialize** them, or set a starting point. This can be done simply by selecting random values for our two parameters $w$ and $b$.As a note here, the parameters $w$ and $b$ are referred to as **weights** and **biases**, while the output of the model (in this case $\hat y = w \cdot x + b$) is called a **prediction** or **hypothesis**.
###Code
# CODE:
# --------------------------------------------
np.random.seed(seed)
# Initialize w and b randomly
w = np.random.random()
b = np.random.random()
# Create a function that makes predictions based on the weights and biases
def predict(x):
"""
Returns the predictions for x, based on the weights (w) and the biases (b)
"""
return w * x + b
# Generate a prediction
y_hat = predict(x)
# PLOTTING:
# --------------------------------------------
# Create figure
fig = plt.figure(figsize=(7, 5))
ax = plt.subplot(111)
# Scatter data points
ax.scatter(x, y, c='#1f77b4', label='data points')
# Draw the prediction
ax.plot(x, y_hat, c='#ff7f0e', label='prediction')
# Aesthetic parameters
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_ylim([-20, 220])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.legend(loc='lower right')
ax.set_title('Linear Regression (random initialization)')
###Output
_____no_output_____
###Markdown
Initially, as we can see, the model isn't faring so well.Now begins the **training phase** of the algorithm, where it will keep improving until some criterion is met. The performance metric that the algorithm tries to improve upon is called a **cost** (or **loss**) **function**. Thus, our goal is to **minimize** this cost function.If we look at this in a bit more detail, the cost function (denoted as $J$) is a function of our two parameters $w$ and $b$:$$J(w, b) = \frac{1}{N} \cdot \sum_{i=1}^N{\left( y_i - \hat y_i \right) ^2} = \frac{1}{N} \cdot \sum_{i=1}^N{\left( y_i - w \cdot x_i - b \right) ^2}$$To get a better understanding of how the cost function works, we'll first see the impact each parameter has on it, while keeping the other constant.
###Code
# CODE:
# --------------------------------------------
# calculate current cost
J = mse(y, y_hat)
# calculate the cost for different values of w
w_range = np.arange(0, 4, 0.05)
y_w = [v * x + b for v in w_range]
J_w = [mse(y, v) for v in y_w]
w_best = w_range[np.argmin(J_w)]
# calculate the cost for different values of b
b_range = np.arange(-10, 130, 1)
y_b = [w * x + v for v in b_range]
J_b = [mse(y, v) for v in y_b]
b_best = b_range[np.argmin(J_b)]
# PLOTTING:
# --------------------------------------------
# Create figure
fig = plt.figure(figsize=(12, 6))
# Subplot 1
ax1 = plt.subplot(121)
# Draw artists for subplot 1
ax1.plot(w_range, J_w, c='#1f77b4', zorder=-1) # cost curve
ax1.scatter(w, J, c='#ff7f0e', s=50, edgecolor='#1f77b4') # current w
ax1.scatter(w_best, min(J_w), c='#e377c2', s=50, edgecolor='#1f77b4') # best w
ax1.annotate('current $w$', xy=(w, J), xytext=(1.5, 7000),
arrowprops=dict(arrowstyle='->',
connectionstyle="angle3,angleA=60,angleB=15"))
ax1.annotate('best $w$', xy=(w_best, min(J_w)), xytext=(1.5, 2000),
arrowprops=dict(arrowstyle='->',
connectionstyle="angle3,angleA=0,angleB=-90"))
# Subplot 1 - aesthetic parameters
ax1.set_xlabel('$w$')
ax1.set_ylabel('$J$')
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
ax1.yaxis.set_ticks_position('left')
ax1.xaxis.set_ticks_position('bottom')
ax1.set_title('Cost with respect to $w$ ')
# Subplot 2
ax2 = plt.subplot(122)
# Draw artists for subplot 2
ax2.plot(b_range, J_b, c='#1f77b4', zorder=-1)
ax2.scatter(b, J, c='#ff7f0e', s=50, edgecolor='#1f77b4')
ax2.scatter(b_best, min(J_b), c='#e377c2', s=50, edgecolor='#1f77b4')
ax2.annotate('current $b$', xy=(b, J), xytext=(40, 4000),
arrowprops=dict(arrowstyle='->',
connectionstyle="angle3,angleA=-50,angleB=0"))
ax2.annotate('best $b$', xy=(b_best, min(J_b)), xytext=(50, 2000),
arrowprops=dict(arrowstyle='->',
connectionstyle="angle3,angleA=0,angleB=-90"))
# Subplot 2 - aesthetic parameters
ax2.set_xlabel('$b$')
ax2.set_ylabel('$J$')
ax2.spines['right'].set_visible(False)
ax2.spines['top'].set_visible(False)
ax2.yaxis.set_ticks_position('left')
ax2.xaxis.set_ticks_position('bottom')
ax2.set_title('Cost with respect to $b$ ')
###Output
_____no_output_____
###Markdown
The two figures above illustrate how the cost changes with respect to each of the two variables. Our starting position is also depicted in the two figures, as well as our *goal* (the value of each parameter that minimizes the cost function). In this problem it isn't so hard to calculate the cost for every parameter and draw the curves, however this is **impossible** in more complex problems. Furthermore, the previous figures assume we optimize each parameter independently of the other. Preferably, we'd want to optimize them together. In the figure above, the darker the color the lower the value of the cost function. Again, we need a way to navigate from the *current position* (acquired from the random initialization of our two parameters $w$ and $b$) to the *best position* (a position that is unknown in real world problems).One way to tackle this is through **[Gradient Descent](https://en.wikipedia.org/wiki/Gradient_descent)**.How does this work? By computing the **gradients of the cost function w.r.t. each of the parameters**, we are essentially calculating the *slope* of this function at our current position. The slope, in turn, tells us the direction in which the cost *increases*, so moving against it will reduce the cost function's value! The two partial derivatives we need to compute are the following:$$ \frac{dJ}{dw} \quad and \quad \frac{dJ}{db} $$Afterwards, we need to change the values of our parameters $w$ and $b$ so as to move *against* the gradient. This change is called an **update** (note the minus sign below):$$ w^{new} \leftarrow w - \lambda \cdot \frac{dJ}{dw} \quad and \quad b^{new} \leftarrow b - \lambda \cdot \frac{dJ}{db}$$After the first update, a new prediction is made (using the new values of our two parameters), the new cost is calculated, the derivatives are computed once again and a new update is made. These steps are repeated again and again, until the cost function stops dropping. This procedure is referred to as the **training phase**.Another term we use in machine learning is the term **epoch**. An epoch is when an algorithm has *seen* all of the training data once and has updated its parameters accordingly. In this case, an epoch is concluded each time the weights are updated. An example training phase can be seen in the figure below.The $\lambda$ parameter we saw before is called the **learning rate** and dictates how *large* each update will be. Too small and we will require many steps to reach our goal; too large and we might *overshoot* the minimum and the algorithm might never converge. This can be seen in the figure below:The partial derivatives in linear regression are:$$\frac{dJ}{dw} = - \frac{2}{N} \cdot \sum_{i=1}^N x_i \left(y_i - w \cdot x_i - b \right)$$$$\frac{dJ}{db} = - \frac{2}{N} \cdot \sum_{i=1}^N \left(y_i - w \cdot x_i - b \right) $$
###Code
# CODE:
# --------------------------------------------
# Create two functions that will help us train the algorithm
def compute_derivatives(x, y):
"""
First generate a prediction for x and then compute the derivatives of
the cost function with respect to the weights (w) and the biases (b).
"""
y_hat = predict(x)
dw = - (2 / len(x)) * sum(x * (y - y_hat))
db = - (2 / len(x)) * sum(y - y_hat)
return dw, db
def update(x, y, lr=0.00005):
"""
Generates a prediction for x, computes the partial derivatives of the
cost function and uses them to update the values of the weights (w)
and biases (b) according to learning rate (lr). It doesn't overwrite
the old parameters; instead it returns the new values.
"""
dw, db = compute_derivatives(x, y)
new_w = w - (lr * dw)
new_b = b - (lr * db)
return new_w, new_b
# The initial weights and biases are stored in the variables 'w' and 'b'.
# We'll now calculate the weights and biases of the second epoch
# (after the first update)
w1, b1 = w, b # parameters of the 1st epoch
y1 = predict(x) # initial prediction
J1 = mse(y, y1) # initial cost
w, b = update(x, y) # overwrite the old parameters
# Same thing for the third and fourth epochs
w2, b2 = w, b # parameters of the 2nd epoch
y2 = predict(x) # 2nd epoch prediction
J2 = mse(y, y2) # 2nd epoch cost
w, b = update(x, y)
w3, b3 = w, b # parameters of the 3rd epoch
y3 = predict(x) # 3rd epoch prediction
J3 = mse(y, y3) # 3rd epoch cost
w, b = update(x, y)
w4, b4 = w, b # parameters of the 4th epoch
y4 = predict(x) # 4th epoch prediction
J4 = mse(y, y4) # 4th epoch cost
# PLOTTING:
# --------------------------------------------
# Create figure
fig = plt.figure(figsize=(7, 5))
ax = plt.subplot(111)
# Scatter data points
ax.scatter(x, y, c='#1f77b4', label='data points')
# Draw the predictions for the first four epochs
ax.plot(x, y1, c='#ff7f0e', label='1st epoch', alpha=1/8)
ax.plot(x, y2, c='#ff7f0e', label='2nd epoch', alpha=1/4)
ax.plot(x, y3, c='#ff7f0e', label='3rd epoch', alpha=1/2)
ax.plot(x, y4, c='#ff7f0e', label='4th epoch')
# Write the cost of each prediction next to it
ax.text(len(x)+1, y1[-1], str(int(J1)), color='#ff7f0e', alpha=1/8)
ax.text(len(x)+1, y2[-1], str(int(J2)), color='#ff7f0e', alpha=1/4)
ax.text(len(x)+1, y3[-1], str(int(J3)), color='#ff7f0e', alpha=1/2)
ax.text(len(x)+1, y4[-1], str(int(J4)), color='#ff7f0e')
# Aesthetic parameters
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_ylim([-20, 220])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.legend(loc='lower right')
ax.set_title('Linear Regression (first four epochs)')
###Output
_____no_output_____
###Markdown
It's clear that the model is improving after each epoch: the prediction line moves closer to the data and the cost keeps dropping.Now we can finally put it all together and create a Linear Regression class.
###Code
# CODE:
# --------------------------------------------
class LinearRegression:
def __init__(self, epochs=100, learning_rate=0.00005, random_seed=13):
"""
This class creates a Linear Regression model and attempts
to fit it to the given data through gradient descent.
:param epochs (int): The number of training epochs.
:param learning_rate (float): The learning rate of the algorithm.
:param random_seed (int): A number to be used as the seed for the random number generator.
"""
self.epochs = epochs
self.lr = learning_rate
self.w, self.b = self.initialize(random_seed)
self.w_history = []
self.b_history = []
def initialize(self, seed):
"""
Method that initializes the weights and biases to random values.
"""
np.random.seed(seed)
w = np.random.random()
b = np.random.random()
return w, b
def predict(self, x):
"""
Method that makes predictions for a number of points.
"""
return self.w * x + self.b
def cost(self, x, y):
"""
Method that calculates the cost of the prediction on a series of data points.
"""
y_hat = self.predict(x)
return sum(((y - y_hat)**2)) / len(y)
def update(self, x, y):
"""
Method that runs one iteration of gradient descent and updates the class' weights and biases
"""
y_hat = self.predict(x)
dw = - (2 / len(x)) * sum(x * (y - y_hat))
db = - (2 / len(x)) * sum(y - y_hat)
self.w -= (self.lr * dw)
self.b -= (self.lr * db)
def fit(self, x, y):
"""
Method that handles the whole training procedure.
"""
for ep in range(self.epochs):
self.w_history.append(self.w)
self.b_history.append(self.b)
self.update(x, y)
###Output
_____no_output_____
###Markdown
Let's see if it works.
###Code
# CODE:
# --------------------------------------------
model = LinearRegression(epochs=10)
model.fit(x, y)
# PLOTTING:
# --------------------------------------------
# Create figure
fig = plt.figure(figsize=(7, 5))
ax = plt.subplot(111)
# Scatter data points
ax.scatter(x, y, c='#1f77b4', label='data points')
# Draw the predictions for the first four epochs
predictions = [model.w_history[i] * x + model.b_history[i] for i in range(len(model.w_history))]
for i in range(len(model.w_history)):
ax.plot(x, predictions[i], c='#ff7f0e', alpha=i/len(predictions))
# Aesthetic parameters
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_ylim([-20, 220])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_title('Linear Regression (first {} epochs)'.format(model.epochs))
###Output
_____no_output_____
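###Markdown
As a quick sanity check, we can compare the parameters found by gradient descent with the closed-form least-squares solution for the same data. The cell below is a minimal sketch that uses NumPy's `np.polyfit` to fit a degree-1 polynomial directly and prints both sets of parameters and their costs.
###Code
# CODE:
# --------------------------------------------
# Closed-form least-squares fit of a line, for comparison with gradient descent
w_ls, b_ls = np.polyfit(x, y, 1)
print('gradient descent: w = {:.3f}, b = {:.3f}, cost = {:.1f}'.format(model.w, model.b, model.cost(x, y)))
print('least squares: w = {:.3f}, b = {:.3f}, cost = {:.1f}'.format(w_ls, b_ls, mse(y, w_ls * x + b_ls)))
###Output
_____no_output_____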
###Markdown
Multi-variable Linear RegressionWhat happens if we have more than one input variable? Not much actually changes, apart from the fact that we now have a separate weight for each input variable. Variables are often referred to as **features** in Machine Learning.Suppose we have a dataset of $N$ training examples, each consisting of $M$ features. We could represent the training data as an $N \times M$ array $X$: $$X = \left( \begin{array}{cccc}x_{11} & x_{12} & ... & x_{1M} \\x_{21} & x_{22} & ... & x_{2M} \\... & ... & ... & ... \\x_{N1} & x_{N2} & ... & x_{NM} \end{array} \right)$$Each example ($X_i$) is accompanied by a label ($y_i$), like before:$$y = \left( \begin{array}{c}y_1 \\y_2 \\... \\y_N\end{array} \right)$$Each prediction is essentially a linear combination of all the features for the input example. **Each feature has its own weight**:$$\hat y_i = x_{i1} \cdot w_{1} + x_{i2} \cdot w_{2} + ... + x_{iM} \cdot w_{M} + b$$The whole prediction array would look like this:$$\hat y = X \cdot W + b = \left( \begin{array}{cccc}x_{11} & x_{12} & ... & x_{1M} \\x_{21} & x_{22} & ... & x_{2M} \\... & ... & ... & ... \\x_{N1} & x_{N2} & ... & x_{NM} \end{array} \right) \cdot\left( \begin{array}{c}w_1 \\w_2 \\... \\w_M\end{array} \right) + b$$The final $+$ operation is possible through broadcasting. The mathematical equivalent would be if $b$ were an $N \times 1$ column vector $\left( \begin{array}{cccc} b & b & ... & b \end{array} \right)^T$. Linear Regression DiscussionLinear regression is a very simple algorithm that usually doesn't work well in real world applications. This is because it makes a lot of **assumptions** about the data. Some of these are:- First of all, it assumes a **linear relationship** between the data and their labels. As a result it cannot sufficiently model non-linear problems.- Secondly, in the case of multiple input features, it assumes little to no **multicollinearity** in them. This means that the input features shouldn't be highly correlated with each other.- A third assumption made is that there isn't any **autocorrelation** in the data. Autocorrelation occurs when the labels are not independent from one another (e.g. in time-series each label is dependent on its previous values).- Another assumption on the data is **homoscedasticity**. This means that the variance of the label stays the same through all training examples.These are all very strong assumptions, making Linear Regression ill-suited for many real world applications where some of these assumptions are violated. Thus, we are forced to look for stronger algorithms, capable of modelling more complex problems.Before moving on, it's worth mentioning two extensions to Linear Regression, called [Lasso][1] and [Ridge](https://en.wikipedia.org/wiki/Tikhonov_regularization) regressions. [1]: https://en.wikipedia.org/wiki/Lasso_(statistics) Classification: Logistic RegressionContrary to regression, in classification the labels are a set of **discrete** values.We'll try to solve a problem where we want to classify *bananas* and *oranges* according to their length. These two are called **classes**. When we have only two classes, we refer to it as a **binary classification** problem.
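Before we build that dataset, here is a minimal NumPy sketch of the multi-variable linear regression described above. It is illustrative only: the variable names (`X_multi`, `W_multi`, etc.) and the synthetic data are not used anywhere else in this notebook.
###Code
# CODE:
# --------------------------------------------
# Minimal sketch of multi-variable linear regression on synthetic data
np.random.seed(seed)
N_multi, M_multi = 200, 3                               # N examples, M features
X_multi = np.random.random((N_multi, M_multi))          # N x M feature matrix
true_W = np.array([1.5, -2.0, 0.7])                     # weights used to generate the labels
y_multi = X_multi.dot(true_W) + 5 + 0.1 * np.random.normal(size=N_multi)
W_multi = np.random.random(M_multi)                     # one weight per feature
b_multi = np.random.random()
lr_multi = 0.1
for epoch in range(2000):
    y_hat_multi = X_multi.dot(W_multi) + b_multi        # vectorised prediction: X . W + b
    error = y_multi - y_hat_multi
    dW_multi = - (2.0 / N_multi) * X_multi.T.dot(error) # one partial derivative per weight
    db_multi = - (2.0 / N_multi) * error.sum()
    W_multi -= lr_multi * dW_multi
    b_multi -= lr_multi * db_multi
print('recovered weights:', W_multi)                    # should end up close to [1.5, -2.0, 0.7]
print('recovered bias: ', b_multi)                      # should end up close to 5
###Output
_____no_output_____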
###Code
# CODE:
# --------------------------------------------
np.random.seed(5)
n = 100 # number of examples
x = np.concatenate([(5 * np.random.random(n) + 3), (6 * np.random.random(n) + 7)]) # training examples
c = (['orange'] * int(len(x)/2)) + (['banana'] * int(len(x)/2)) # class labels
c_enc = np.array([0] * n + [1] * n) # encode the labels to 0 - 1
df = pd.DataFrame({'x': x, 'c': c, 'y': c_enc})
df.sort_values('x', inplace=True)
# PLOTTING:
# --------------------------------------------
# Create a subplot and scatter the data points
ax = plt.subplot(111)
ax.scatter(x, c_enc)
# Set plot limits
ax.set_xlim([-0.3 + x.min(), x.max() + 0.25])
ax.set_ylim([-0.1, 1.4])
# Set custom labels on the y axis
ax.set_yticks([0, 1])
# Set x and y axis labels
ax.set_xlabel('length (cm)')
ax.set_ylabel('y')
# Remaining aesthetic parameters
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_title('Binary Classification Task')
###Output
_____no_output_____
###Markdown
Since we know how linear regression works, we'll try to use that to solve our binary classification task. First, let's fit a linear regression on the data.
###Code
# CODE:
# --------------------------------------------
model = LinearRegression(epochs=50, learning_rate=0.01)
model.b = -0.6 # because the code isn't optimal, this is used to ensure convergence
model.fit(x, c_enc)
preds = model.predict(x)
df['pred_lr'] = model.predict(df[['x']])
# PLOTTING:
# --------------------------------------------
# Draw the data and the linear regression line
ax = plt.subplot(111)
ax.scatter(x, c_enc, label='data points')
ax.plot(x, preds, c='#ff7f0e', label='linear regression')
# Aesthetic parameters
ax.set_xlabel('length (cm)')
ax.set_ylabel('y')
ax.set_yticks([0, 1])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.legend(loc='lower right')
ax.set_title('Linear Regression on binary data')
###Output
_____no_output_____
###Markdown
Now by applying a threshold (let's say at $0.5$, which is the middle of the encoded $y$ values), we could use the value of the regression line to classify the given examples.
###Code
# PLOTTING:
# --------------------------------------------
# Scatter the data points, the linear regression line, a horizontal line depicting the threshold value
# and the resulting line of the threshold.
ax = plt.subplot(111)
ax.scatter(df.x, df.y, label='data points')
ax.plot(df.x, df.pred_lr, color='#ff7f0e', alpha=0.3, label='linear regression')
ax.plot([df.x.min(), df.x.max()], [0.5, 0.5], color='0.5', alpha=0.6, label='threshold', linestyle='--')
ax.plot(df.x, np.where(df.pred_lr > 0.5, 1, 0), color='#ff7f0e', lw=2, label='thresholded regression')
# Add a text box above the threshold line
ax.text(4, 0.55, 'threshold = $0.5$', color='0.5')
# Aesthetic parameters
ax.set_xlim([-0.3 + df.x.min(), df.x.max() + 0.25])
ax.set_ylim([-0.1, 1.4])
ax.set_xlabel('length (cm)')
ax.set_ylabel('y')
ax.set_yticks([0, 1])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.legend()
ax.set_title('Applying a threshold to Linear Regression line')
###Output
_____no_output_____
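###Markdown
To put a number on how well this thresholded regression line separates the two classes, we can simply measure how often the thresholded prediction agrees with the encoded label. The cell below is a minimal sketch that reuses the `df` dataframe built above.
###Code
# CODE:
# --------------------------------------------
# Accuracy of the thresholded linear regression on the training data
thresholded_preds = np.where(df.pred_lr > 0.5, 1, 0)
accuracy = (thresholded_preds == df.y).mean()
print('accuracy of thresholded regression: {:.2%}'.format(accuracy))
###Output
_____no_output_____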
###Markdown
Machine LearningUp till now, we've seen the tools necessary to solve a variety of problems. These problems, however, need to have a **finite** number of steps and states, so that we can account for each one. We need to define a set of rules; rules we can code, so that in every possible scenario our algorithm comes up with an answer. With our current knowledge we can't solve a problem with an **indefinite** number of states (e.g. a chess match). The algorithms we can write this way are called **deterministic**. In contrast, there is another category of algorithms called **non-deterministic**, whose response is not hard-coded and can differ from run to run, even on the same input. Machine Learning (ML) will help us with the latter.> Machine Learning explores the study and construction of algorithms that can learn from and make predictions on data.So how does ML attempt to solve complex problems? Much like humans do, through trial and error! It learns to make associations from the data itself, without having any expert define or dictate a set of rules. These are formed on their own, through a procedure we call **training**. Let's not get too ahead of ourselves.A more formal definition of Machine Learning is the following:> A computer program is said to learn from experience (*E*) with respect to some class of tasks (*T*) and performance measure (*P*) if its performance at tasks in *T*, as measured by *P*, improves with experience *E*.Let's try to break this down a bit:- The class of tasks (*T*) refers to the type of the problem (classification, clustering, etc.).- The performance measure (*P*) is a function that indicates how **well** the algorithm is doing in its task.- Experience (*E*), in the context of **training**, refers to the algorithm improving its performance on the task. Machine learning tasks fall into 3 broad categories:- **Supervised Learning**. Here the algorithm is presented with **labeled** data. It is the algorithm's job to associate the inputs with their labels. Classification and regression problems fall into this category.- **Unsupervised Learning**. The data in these types of problems has **no** labels. The algorithm's job is to find patterns or clusters in the data. Clustering, density estimation and dimensionality reduction problems fall into this category.- **Reinforcement Learning**. The algorithm interacts with a dynamic environment in which it must achieve a certain goal.The most popular category of Machine Learning is supervised learning. Supervised LearningIn this category we have a set of examples (or samples) $X$ and their labels (or targets) $Y$. The goal of the algorithm is to learn from $X$ and $Y$ in order to be able to predict the labels of future unseen examples.- If $Y$ is discrete, the problem we are trying to solve is called **classification**.- If $Y$ is continuous, we are trying to solve a **regression** problem. Regression: Linear RegressionThe simplest problem we can solve is a linear regression problem.> In statistics, linear regression is an approach for modeling the relationship between a scalar dependent variable $y$ and one or more explanatory variables (or independent variables) denoted $x$.In the context of ML we usually refer to $x$ as a **training example** and $y$ as its **label**. Basically, we have $(x,y)$ data and try to find the line that fits this data the best.Let's define our problem: We'll take $100$ samples evenly distributed in $[0,100]$. These samples follow an underlying linear distribution but are infused with noise.
The goal is to find a line that best *fits* the data.
###Code
# CODE:
# --------------------------------------------
from __future__ import print_function, division
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Ensure reproducibility
seed = 13
np.random.seed(seed)
# Construct data
x = np.linspace(0, 100, 100) # training examples
y = 2 * x + 10 * np.random.normal(size=100) # labels
# PLOTTING:
# --------------------------------------------
# Create figure
fig = plt.figure(figsize=(7, 5))
ax = plt.subplot(111)
# Scatter data points
ax.scatter(x, y, c='#1f77b4')
# Aesthetic parameters
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_title('Training examples and their labels')
###Output
_____no_output_____
###Markdown
As we previously said, we are essentially looking for a line that best *fits* the data. A line is defined as $y = w \cdot x + b$, so we need to figure out the values of $w$ and $b$. We'll draw a few lines to see the differences:
###Code
# CODE:
# --------------------------------------------
# Line 1
w1 = 1
b1 = 20
y1 = w1 * x + b1
# Line 2
w2 = 4
b2 = -20
y2 = w2 * x + b2
# Line 3
w3 = -0.5
b3 = 150
y3 = w3 * x + b3
# PLOTTING:
# --------------------------------------------
# Create figure
fig = plt.figure(figsize=(7, 5))
ax = plt.subplot(111)
# Scatter data points
ax.scatter(x, y, c='#1f77b4', label='data points')
# Draw the three lines
ax.plot(x, y1, c='#ff7f0e', label='line 1')
ax.plot(x, y2, c='#e377c2', label='line 2')
ax.plot(x, y3, c='#2ca02c', label='line 3')
# Aesthetic parameters
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_ylim([-20, 220])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.legend(loc='lower right')
ax.set_title('Data points and random lines')
###Output
_____no_output_____
###Markdown
Now, to our question at hand. Which of these three lines best *fits* the data? Well, first it would help if we specify what we mean by the word *fits* more clearly. Or even better, if we can somehow **quantify** it. What we essentially need is a measure of how *close* the line is to the data. In the context of Machine Learning, we refer to this *measure* as a **performance metric**. This is one of the most important parts of machine learning, as it gives us a way of telling how *well* our algorithm is doing, or how *close* it is to reaching its goal; but most importantly it gives us a way to tell if our algorithm is *improving* or not!In this case we will select the [Mean Squared Error](https://en.wikipedia.org/wiki/Mean_squared_error) (MSE) as our performance metric:$$MSE = \frac{1}{N} \cdot \sum_{i=1}^N{\left( y_i - \hat y_i \right) ^2}$$where $N$ is the number of samples, $y_i$ is the label for data point $x_i$ and $\hat y_i$ is the *prediction* for the same data point.The smaller the MSE, the closer the line is to our data.
###Code
# CODE:
# --------------------------------------------
def mse(y, y_hat):
"""
Calculates the Mean Squared Error between the labels (y) and the predictions (y_hat)
"""
return ((y - y_hat)**2).sum() / len(y)
print('line1 MSE:', mse(y, y1))
print('line2 MSE:', mse(y, y2))
print('line3 MSE:', mse(y, y3))
###Output
line1 MSE: 1802.4096380013298
line2 MSE: 9934.38656402079
line3 MSE: 5821.561278104836
###Markdown
Judging by this, `line1` is the best of the three.Now that we've clearly defined our goal (i.e. to achieve the lowest possible MSE), we can move on to creating a Linear Regression model that will do exactly that: find the line that minimizes the MSE.The first step in most Machine Learning algorithms is to **initialize** them, or set a starting point. This can be done simply by selecting random values for our two parameters $w$ and $b$.As a note here, the parameters $w$ and $b$ are referred to as **weights** and **biases**, while the output of the model (in this case $\hat y = w \cdot x + b$) is called a **prediction** or **hypothesis**.
###Code
# CODE:
# --------------------------------------------
np.random.seed(seed)
# Initialize w and b randomly
w = np.random.random()
b = np.random.random()
# Create a function that makes predictions based on the weights and biases
def predict(x):
"""
Returns the predictions for x, based on the weights (w) and the biases (b)
"""
return w * x + b
# Generate a prediction
y_hat = predict(x)
# PLOTTING:
# --------------------------------------------
# Create figure
fig = plt.figure(figsize=(7, 5))
ax = plt.subplot(111)
# Scatter data points
ax.scatter(x, y, c='#1f77b4', label='data points')
# Draw the prediction
ax.plot(x, y_hat, c='#ff7f0e', label='prediction')
# Aesthetic parameters
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_ylim([-20, 220])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.legend(loc='lower right')
ax.set_title('Linear Regression (random initialization)')
###Output
_____no_output_____
###Markdown
Initially, as we can see, the model isn't faring so well.Now begins the **training phase** of the algorithm, where it will keep improving until some criterion is met. The performance metric that the algorithm tries to improve upon is called a **cost** (or **loss**) **function**. Thus, our goal is to **minimize** this cost function.If we look at this in a bit more detail, the cost function (denoted as $J$) is a function of our two parameters $w$ and $b$:$$J(w, b) = \frac{1}{N} \cdot \sum_{i=1}^N{\left( y_i - \hat y_i \right) ^2} = \frac{1}{N} \cdot \sum_{i=1}^N{\left( y_i - w \cdot x_i - b \right) ^2}$$To get a better understanding of how the cost function works, we'll first see the impact each parameter has on it, while keeping the other constant.
###Code
# CODE:
# --------------------------------------------
# calculate current cost
J = mse(y, y_hat)
# calculate the cost for different values of w
w_range = np.arange(0, 4, 0.05)
y_w = [v * x + b for v in w_range]
J_w = [mse(y, v) for v in y_w]
w_best = w_range[np.argmin(J_w)]
# calculate the cost for different values of b
b_range = np.arange(-10, 130, 1)
y_b = [w * x + v for v in b_range]
J_b = [mse(y, v) for v in y_b]
b_best = b_range[np.argmin(J_b)]
# PLOTTING:
# --------------------------------------------
# Create figure
fig = plt.figure(figsize=(12, 6))
# Subplot 1
ax1 = plt.subplot(121)
# Draw artists for subplot 1
ax1.plot(w_range, J_w, c='#1f77b4', zorder=-1) # cost curve
ax1.scatter(w, J, c='#ff7f0e', s=50, edgecolor='#1f77b4') # current w
ax1.scatter(w_best, min(J_w), c='#e377c2', s=50, edgecolor='#1f77b4') # best w
ax1.annotate('current $w$', xy=(w, J), xytext=(1.5, 7000),
arrowprops=dict(arrowstyle='->',
connectionstyle="angle3,angleA=60,angleB=15"))
ax1.annotate('best $w$', xy=(w_best, min(J_w)), xytext=(1.5, 2000),
arrowprops=dict(arrowstyle='->',
connectionstyle="angle3,angleA=0,angleB=-90"))
# Subplot 1 - aesthetic parameters
ax1.set_xlabel('$w$')
ax1.set_ylabel('$J$')
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
ax1.yaxis.set_ticks_position('left')
ax1.xaxis.set_ticks_position('bottom')
ax1.set_title('Cost with respect to $w$ ')
# Subplot 2
ax2 = plt.subplot(122)
# Draw artists for subplot 2
ax2.plot(b_range, J_b, c='#1f77b4', zorder=-1)
ax2.scatter(b, J, c='#ff7f0e', s=50, edgecolor='#1f77b4')
ax2.scatter(b_best, min(J_b), c='#e377c2', s=50, edgecolor='#1f77b4')
ax2.annotate('current $b$', xy=(b, J), xytext=(40, 4000),
arrowprops=dict(arrowstyle='->',
connectionstyle="angle3,angleA=-50,angleB=0"))
ax2.annotate('best $b$', xy=(b_best, min(J_b)), xytext=(50, 2000),
arrowprops=dict(arrowstyle='->',
connectionstyle="angle3,angleA=0,angleB=-90"))
# Subplot 2 - aesthetic parameters
ax2.set_xlabel('$b$')
ax2.set_ylabel('$J$')
ax2.spines['right'].set_visible(False)
ax2.spines['top'].set_visible(False)
ax2.yaxis.set_ticks_position('left')
ax2.xaxis.set_ticks_position('bottom')
ax2.set_title('Cost with respect to $b$ ')
###Output
_____no_output_____
###Markdown
The two figures above illustrate how the cost changes with respect to each of the two variables. Our starting position is also depicted in the two figures, as well as our *goal* (the value of each parameter that minimizes the cost function). In this problem it isn't so hard to calculate the cost for every parameter and draw the curves, however this is **impossible** in more complex problems. Furthermore, the previous figures assume we optimize each parameter independently of the other. Preferably, we'd want to optimize them together. In the figure above, the darker the color the lower the value of the cost function. Again, we need a way to navigate from the *current position* (acquired from the random initialization of our two parameters $w$ and $b$) to the *best position* (a position that is unknown in real world problems).One way to tackle this is through **[Gradient Descent](https://en.wikipedia.org/wiki/Gradient_descent)**.How does this work? By computing the **gradients of the cost function w.r.t. each of the parameters**, we are essentially calculating the *slope* of this function at our current position. The slope, in turn, tells us the direction in which the cost *increases*, so moving against it will reduce the cost function's value! The two partial derivatives we need to compute are the following:$$ \frac{dJ}{dw} \quad and \quad \frac{dJ}{db} $$Afterwards, we need to change the values of our parameters $w$ and $b$ so as to move *against* the gradient. This change is called an **update** (note the minus sign below):$$ w^{new} \leftarrow w - \lambda \cdot \frac{dJ}{dw} \quad and \quad b^{new} \leftarrow b - \lambda \cdot \frac{dJ}{db}$$After the first update, a new prediction is made (using the new values of our two parameters), the new cost is calculated, the derivatives are computed once again and a new update is made. These steps are repeated again and again, until the cost function stops dropping. This procedure is referred to as the **training phase**.Another term we use in machine learning is the term **epoch**. An epoch is when an algorithm has *seen* all of the training data once and has updated its parameters accordingly. In this case, an epoch is concluded each time the weights are updated. An example training phase can be seen in the figure below.The $\lambda$ parameter we saw before is called the **learning rate** and dictates how *large* each update will be. Too small and we will require many steps to reach our goal; too large and we might *overshoot* the minimum and the algorithm might never converge. This can be seen in the figure below:The partial derivatives in linear regression are:$$\frac{dJ}{dw} = - \frac{2}{N} \cdot \sum_{i=1}^N x_i \left(y_i - w \cdot x_i - b \right)$$$$\frac{dJ}{db} = - \frac{2}{N} \cdot \sum_{i=1}^N \left(y_i - w \cdot x_i - b \right) $$
###Code
# CODE:
# --------------------------------------------
# Create two functions that will help us train the algorithm
def compute_derivatives(x, y):
"""
First generate a prediction for x and then compute the derivatives of
the cost function with respect to the weights (w) and the biases (b).
"""
y_hat = predict(x)
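# NOTE: the gradient below is scaled by 1/sum(x) rather than the 1/N used in the
# formula above; since sum(x) is a positive constant, this only rescales the step
# size (which the learning rate absorbs) and does not change the update direction.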
dw = - (2 / sum(x)) * sum(x * (y - y_hat))
db = - (2 / sum(x)) * sum(y - y_hat)
return dw, db
def update(x, y, lr=0.001):
"""
Generates a prediction for x, computes the partial derivatives of the
cost function and uses them to update the values of the weights (w)
and biases (b) according to learning rate (lr). It doesn't overwrite
the old parameters; instead it returns the new values.
"""
dw, db = compute_derivatives(x, y)
new_w = w - (lr * dw)
new_b = b - (lr * db)
return new_w, new_b
# The initial weights and biases are stored in the variables 'w' and 'b'.
# We'll now calculate the weights and biases of the second epoch
# (after the first update)
w1, b1 = w, b # parameters of the 1st epoch
y1 = predict(x) # initial prediction
J1 = mse(y, y1) # initial cost
w, b = update(x, y) # overwrite the old parameters
# Same thing for the third and fourth epochs
w2, b2 = w, b # parameters of the 2nd epoch
y2 = predict(x) # 2nd epoch prediction
J2 = mse(y, y2) # 2nd epoch cost
w, b = update(x, y)
w3, b3 = w, b # parameters of the 3rd epoch
y3 = predict(x) # 3rd epoch prediction
J3 = mse(y, y3) # 3rd epoch cost
w, b = update(x, y)
w4, b4 = w, b # parameters of the 4th epoch
y4 = predict(x) # 4th epoch prediction
J4 = mse(y, y4) # 4th epoch cost
# PLOTTING:
# --------------------------------------------
# Create figure
fig = plt.figure(figsize=(7, 5))
ax = plt.subplot(111)
# Scatter data points
ax.scatter(x, y, c='#1f77b4', label='data points')
# Draw the predictions for the first four epochs
ax.plot(x, y1, c='#ff7f0e', label='1st epoch', alpha=1/8)
ax.plot(x, y2, c='#ff7f0e', label='2nd epoch', alpha=1/4)
ax.plot(x, y3, c='#ff7f0e', label='3rd epoch', alpha=1/2)
ax.plot(x, y4, c='#ff7f0e', label='4th epoch')
# Write the cost of each prediction next to it
ax.text(len(x)+1, y1[-1], str(int(J1)), color='#ff7f0e', alpha=1/8)
ax.text(len(x)+1, y2[-1], str(int(J2)), color='#ff7f0e', alpha=1/4)
ax.text(len(x)+1, y3[-1], str(int(J3)), color='#ff7f0e', alpha=1/2)
ax.text(len(x)+1, y4[-1], str(int(J4)), color='#ff7f0e')
# Aesthetic parameters
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_ylim([-20, 220])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.legend(loc='lower right')
ax.set_title('Linear Regression (first four epochs)')
###Output
_____no_output_____
###Markdown
It's clear that the model is improving after each epoch: the prediction line moves closer to the data and the cost keeps dropping.Now we can finally put it all together and create a Linear Regression class.
###Code
# CODE:
# --------------------------------------------
class LinearRegression:
def __init__(self, epochs=100, learning_rate=0.001, random_seed=13):
"""
This class creates a Linear Regression model and attempts
to fit it to the given data through gradient descent.
:param epochs (int): The number of training epochs.
:param learning_rate (float): The learning rate of the algorithm.
:param random_seed (int): A number to be used as the seed for the random number generator.
"""
self.epochs = epochs
self.lr = learning_rate
self.w, self.b = self.initialize(random_seed)
self.w_history = []
self.b_history = []
def initialize(self, seed):
"""
Method that initializes the weights and biases to random values.
"""
np.random.seed(seed)
w = np.random.random()
b = np.random.random()
return w, b
def predict(self, x):
"""
Method that makes predictions for a number of points.
"""
return self.w * x + self.b
def cost(self, x, y):
"""
Method that calculates the cost of the prediction on a series of data points.
"""
y_hat = self.predict(x)
return sum(((y - y_hat)**2)) / len(y)
def update(self, x, y):
"""
Method that runs one iteration of gradient descent and updates the class' weights and biases
"""
y_hat = self.predict(x)
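# NOTE: as in compute_derivatives above, the gradient is scaled by 1/sum(x)
# instead of the 1/N from the formula; only the effective step size changes.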
dw = - (2 / sum(x)) * sum(x * (y - y_hat))
db = - (2 / sum(x)) * sum(y - y_hat)
self.w -= (self.lr * dw)
self.b -= (self.lr * db)
def fit(self, x, y):
"""
Method that handles the whole training procedure.
"""
for ep in range(self.epochs):
self.w_history.append(self.w)
self.b_history.append(self.b)
self.update(x, y)
###Output
_____no_output_____
###Markdown
Let's see if it works.
###Code
# CODE:
# --------------------------------------------
model = LinearRegression(epochs=15)
model.fit(x, y)
# PLOTTING:
# --------------------------------------------
# Create figure
fig = plt.figure(figsize=(7, 5))
ax = plt.subplot(111)
# Scatter data points
ax.scatter(x, y, c='#1f77b4', label='data points')
# Draw the predictions for the first four epochs
predictions = [model.w_history[i] * x + model.b_history[i] for i in range(len(model.w_history))]
for i in range(len(model.w_history)):
ax.plot(x, predictions[i], c='#ff7f0e', alpha=i/len(predictions))
# Aesthetic parameters
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_ylim([-20, 220])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_title('Linear Regression (first {} epochs)'.format(model.epochs))
###Output
_____no_output_____
###Markdown
Multi-variable Linear RegressionWhat happens if we have more than one input variable? Not much actually changes, apart from the fact that we now have a separate weight for each input variable. Variables are often referred to as **features** in Machine Learning.Suppose we have a dataset of $N$ training examples, each consisting of $M$ features. We could represent the training data as an $N \times M$ array $X$: $$X = \left( \begin{array}{cccc}x_{11} & x_{12} & ... & x_{1M} \\x_{21} & x_{22} & ... & x_{2M} \\... & ... & ... & ... \\x_{N1} & x_{N2} & ... & x_{NM} \end{array} \right)$$Each example ($X_i$) is accompanied by a label ($y_i$), like before:$$y = \left( \begin{array}{c}y_1 \\y_2 \\... \\y_N\end{array} \right)$$Each prediction is essentially a linear combination of all the features for the input example. **Each feature has its own weight**:$$\hat y_i = x_{i1} \cdot w_{1} + x_{i2} \cdot w_{2} + ... + x_{iM} \cdot w_{M} + b$$The whole prediction array would look like this:$$\hat y = X \cdot W + b = \left( \begin{array}{cccc}x_{11} & x_{12} & ... & x_{1M} \\x_{21} & x_{22} & ... & x_{2M} \\... & ... & ... & ... \\x_{N1} & x_{N2} & ... & x_{NM} \end{array} \right) \cdot\left( \begin{array}{c}w_1 \\w_2 \\... \\w_M\end{array} \right) + b$$The final $+$ operation is possible through broadcasting. The mathematical equivalent would be if $b$ were an $N \times 1$ column vector $\left( \begin{array}{cccc} b & b & ... & b \end{array} \right)^T$. Linear Regression DiscussionLinear regression is a very simple algorithm that usually doesn't work well in real world applications. This is because it makes a lot of **assumptions** about the data. Some of these are:- First of all, it assumes a **linear relationship** between the data and their labels. As a result it cannot sufficiently model non-linear problems.- Secondly, in the case of multiple input features, it assumes little to no **multicollinearity** in them. This means that the input features shouldn't be highly correlated with each other.- A third assumption made is that there isn't any **autocorrelation** in the data. Autocorrelation occurs when the labels are not independent from one another (e.g. in time-series each label is dependent on its previous values).- Another assumption on the data is **homoscedasticity**. This means that the variance of the label stays the same through all training examples.These are all very strong assumptions, making Linear Regression ill-suited for many real world applications where some of these assumptions are violated. Thus, we are forced to look for stronger algorithms, capable of modelling more complex problems.Before moving on, it's worth mentioning two extensions to Linear Regression, called [Lasso][1] and [Ridge](https://en.wikipedia.org/wiki/Tikhonov_regularization) regressions. [1]: https://en.wikipedia.org/wiki/Lasso_(statistics) Classification: Logistic RegressionContrary to regression, in classification the labels are a set of **discrete** values.We'll try to solve a problem where we want to classify *bananas* and *oranges* according to their length. These two are called **classes**. When we have only two classes, we refer to it as a **binary classification** problem.
###Code
# CODE:
# --------------------------------------------
np.random.seed(5)
n = 100 # number of examples
x = np.concatenate([(5 * np.random.random(n) + 3), (6 * np.random.random(n) + 7)]) # training examples
c = (['orange'] * int(len(x)/2)) + (['banana'] * int(len(x)/2)) # class labels
c_enc = np.array([0] * n + [1] * n) # encode the labels to 0 - 1
df = pd.DataFrame({'x': x, 'c': c, 'y': c_enc})
df.sort_values('x', inplace=True)
# PLOTTING:
# --------------------------------------------
# Create a subplot and scatter the data points
ax = plt.subplot(111)
ax.scatter(x, c_enc)
# Set plot limits
ax.set_xlim([-0.3 + x.min(), x.max() + 0.25])
ax.set_ylim([-0.1, 1.4])
# Set custom labels on the y axis
ax.set_yticks([0, 1])
# Set x and y axis labels
ax.set_xlabel('length (cm)')
ax.set_ylabel('y')
# Remaining aesthetic parameters
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_title('Binary Classification Task')
###Output
_____no_output_____
###Markdown
Since we know how linear regression works, we'll try to use that to solve our binary classification task. First, let's fit a linear regression on the data.
###Code
# CODE:
# --------------------------------------------
model = LinearRegression(epochs=50, learning_rate=0.01)
model.b = -0.6 # because the code isn't optimal, this is used to ensure convergence
model.fit(x, c_enc)
preds = model.predict(x)
df['pred_lr'] = model.predict(df[['x']])
# PLOTTING:
# --------------------------------------------
# Draw the data and the linear regression line
ax = plt.subplot(111)
ax.scatter(x, c_enc, label='data points')
ax.plot(x, preds, c='#ff7f0e', label='linear regression')
# Aesthetic parameters
ax.set_xlabel('length (cm)')
ax.set_ylabel('y')
ax.set_yticks([0, 1])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.legend(loc='lower right')
ax.set_title('Linear Regression on binary data')
###Output
_____no_output_____
###Markdown
Now by applying a threshold (let's say at $0.5$, which is the middle of the encoded $y$ values), we could use the value of the regression line to classify the given examples.
###Code
# PLOTTING:
# --------------------------------------------
# Scatter the data points, the linear regression line, a horizontal line depicting the threshold value
# and the resulting line of the threshold.
ax = plt.subplot(111)
ax.scatter(df.x, df.y, label='data points')
ax.plot(df.x, df.pred_lr, color='#ff7f0e', alpha=0.3, label='linear regression')
ax.plot([df.x.min(), df.x.max()], [0.5, 0.5], color='0.5', alpha=0.6, label='threshold', linestyle='--')
ax.plot(df.x, np.where(df.pred_lr > 0.5, 1, 0), color='#ff7f0e', lw=2, label='thresholded regression')
# Add a text box above the threshold line
ax.text(4, 0.55, 'threshold = $0.5$', color='0.5')
# Aesthetic parameters
ax.set_xlim([-0.3 + df.x.min(), df.x.max() + 0.25])
ax.set_ylim([-0.1, 1.4])
ax.set_xlabel('length (cm)')
ax.set_ylabel('y')
ax.set_yticks([0, 1])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.legend()
ax.set_title('Applying a threshold to Linear Regression line')
###Output
_____no_output_____ |
1_time_series_arima.ipynb | ###Markdown
Time series forecasting with ARIMAIn this notebook, we demonstrate how to:- prepare time series data for training an ARIMA time series forecasting model- implement a simple ARIMA model to forecast the next HORIZON steps ahead (time *t+1* through *t+HORIZON*) in the time series- evaluate the model The data in this example is taken from the GEFCom2014 forecasting competition1. It consists of 3 years of hourly electricity load and temperature values between 2012 and 2014. The task is to forecast future values of electricity load. In this example, we show how to forecast a small number of steps (the HORIZON) ahead, using historical load data only.1Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli and Rob J. Hyndman, "Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond", International Journal of Forecasting, vol.32, no.3, pp 896-913, July-September, 2016.
###Code
import os
import warnings
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
import math
from pandas.tools.plotting import autocorrelation_plot
# from pyramid.arima import auto_arima
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.preprocessing import MinMaxScaler
from common.utils import load_data, mape
from IPython.display import Image
%matplotlib inline
pd.options.display.float_format = '{:,.2f}'.format
np.set_printoptions(precision=2)
warnings.filterwarnings("ignore") # specify to ignore warning messages
###Output
_____no_output_____
###Markdown
Load the data from csv into a Pandas dataframe
###Code
energy = load_data('./data')[['load']]
energy.head(10)
###Output
_____no_output_____
###Markdown
Plot all available load data (January 2012 to Dec 2014)
###Code
energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
plt.xlabel('timestamp', fontsize=12)
plt.ylabel('load', fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
Create training and testing data setsWe separate our dataset into train and test sets. We train the model on the train set. After the model has finished training, we evaluate the model on the test set. We must ensure that the test set covers a later period in time than the training set, so that the model does not gain information from future time periods.We will allocate the period 1st November 2014 to 29th December 2014 to the training set (roughly two months) and the remaining period, 30th to 31st December 2014, to the test set (48 hours), matching the dates used in the code below. Since this is hourly energy consumption, there is a strong daily seasonal pattern, but the consumption is most similar to the consumption of the most recent days. Therefore, using a relatively small window of time for training the data should be sufficient.> NOTE: Since the function we use to fit the ARIMA model performs in-sample validation during fitting, we will omit the validation data from this notebook.
###Code
train_start_dt = '2014-11-01 00:00:00'
test_start_dt = '2014-12-30 00:00:00'
energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
.join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
.plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
plt.xlabel('timestamp', fontsize=12)
plt.ylabel('load', fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
Data preparation Our data preparation for the training set will involve the following steps:1. Filter the original dataset to include only that time period reserved for the training set2. Scale the time series such that the values fall within the interval (0, 1) Create training set containing only the model features
###Code
train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
test = energy.copy()[energy.index >= test_start_dt][['load']]
print('Training data shape: ', train.shape)
print('Test data shape: ', test.shape)
###Output
Training data shape: (1416, 1)
Test data shape: (48, 1)
###Markdown
Scale data to be in range (0, 1). This transformation should be calibrated on the training set only. This is to prevent information from the validation or test sets leaking into the training data.
###Code
scaler = MinMaxScaler()
train['load'] = scaler.fit_transform(train)
train.head(10)
###Output
_____no_output_____
###Markdown
Original vs scaled data:
###Code
energy[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']].rename(columns={'load':'original load'}).plot.hist(bins=100, fontsize=12)
train.rename(columns={'load':'scaled load'}).plot.hist(bins=100, fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
Let's also scale the test data
###Code
test['load'] = scaler.transform(test)
test.head()
###Output
_____no_output_____
###Markdown
Implement ARIMA method An ARIMA model, which stands for **A**uto**R**egressive **I**ntegrated **M**oving **A**verage, can be created using the statsmodels library. In the next section, we perform the following steps:1. Define the model by calling SARIMAX() and passing in the model parameters: the p, d, and q parameters, and the P, D, and Q parameters.2. The model is prepared on the training data by calling the fit() function.3. Predictions can be made by calling the forecast() function and specifying the number of steps (the horizon) to forecast.In an ARIMA model there are 3 parameters that are used to help model the major aspects of a time series: seasonality, trend, and noise. These parameters are:- **p** is the parameter associated with the auto-regressive aspect of the model, which incorporates past values. - **d** is the parameter associated with the integrated part of the model, which affects the amount of differencing to apply to a time series. - **q** is the parameter associated with the moving average part of the model.If our model has a seasonal component, we use a seasonal ARIMA model (SARIMA). In that case we have another set of parameters: P, D, and Q, which describe the same associations as p, d, and q, but correspond with the seasonal components of the model.
###Code
# Specify the number of steps to forecast ahead
HORIZON = 3
print('Forecasting horizon:', HORIZON, 'hours')
###Output
Forecasting horizon: 3 hours
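###Markdown
Before choosing the ARIMA orders, it helps to look at the autocorrelation structure of the (scaled) training series. The sketch below uses the `autocorrelation_plot` helper that is already imported at the top of this notebook; the strong daily (24-hour) pattern in electricity load is what motivates the seasonal period of 24 used later.
###Code
# Inspect the autocorrelation of the scaled training series
autocorrelation_plot(train['load'])
plt.title('Autocorrelation of the scaled load series')
plt.show()
###Output
_____no_output_____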
###Markdown
Selecting the best parameters for an ARIMA model can be challenging - somewhat subjective and time intensive - so we'll leave it as an exercise to the user. We used an **auto_arima()** function and some additional manual selection to find a decent model.>NOTE: For more info on selecting an ARIMA model, please refer to the ARIMA notebook in the /ReferenceNotebook directory.
###Code
order = (4, 1, 0)
seasonal_order = (1, 1, 0, 24)
model = SARIMAX(endog=train, order=order, seasonal_order=seasonal_order)
results = model.fit()
print(results.summary())
###Output
Statespace Model Results
==========================================================================================
Dep. Variable: load No. Observations: 1416
Model: SARIMAX(4, 1, 0)x(1, 1, 0, 24) Log Likelihood 3477.240
Date: Mon, 08 Oct 2018 AIC -6942.479
Time: 12:56:56 BIC -6911.053
Sample: 11-01-2014 HQIC -6930.728
- 12-29-2014
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 0.8406 0.016 52.084 0.000 0.809 0.872
ar.L2 -0.5230 0.034 -15.384 0.000 -0.590 -0.456
ar.L3 0.1531 0.044 3.461 0.001 0.066 0.240
ar.L4 -0.0785 0.036 -2.178 0.029 -0.149 -0.008
ar.S.L24 -0.2349 0.024 -9.831 0.000 -0.282 -0.188
sigma2 0.0004 8.32e-06 47.353 0.000 0.000 0.000
===================================================================================
Ljung-Box (Q): 90.44 Jarque-Bera (JB): 1460.40
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.84 Skew: 0.14
Prob(H) (two-sided): 0.07 Kurtosis: 8.01
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
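###Markdown
The order `(4, 1, 0)` and seasonal order `(1, 1, 0, 24)` used above came from an `auto_arima()` search plus some manual selection, as mentioned earlier. If you want to reproduce a similar search, the sketch below shows one way to call it; it assumes the `pyramid-arima` package (imported, commented out, at the top of this notebook and later renamed `pmdarima`) is installed, and the parameter names follow that library's documentation. The search can take a long time on hourly data.
###Code
# Optional: automated order search with auto_arima (slow; requires pyramid-arima/pmdarima)
from pyramid.arima import auto_arima
stepwise_model = auto_arima(train['load'], start_p=1, start_q=1, max_p=4, max_q=4,
                            m=24, seasonal=True, d=1, D=1, trace=True,
                            error_action='ignore', suppress_warnings=True,
                            stepwise=True)
print(stepwise_model.order, stepwise_model.seasonal_order)
###Output
_____no_output_____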
###Markdown
Next we display the distribution of residuals. A zero mean in the residuals may indicate that there is no bias in the prediction. Evaluate the model We will perform the so-called **walk forward validation**. In practice, time series models are re-trained each time new data becomes available. This allows the model to make the best forecast at each time step. Starting at the beginning of the time series, we train the model on the train data set. Then we make a prediction on the next time step. The prediction is then evaluated against the known value. The training set is then expanded to include the known value and the process is repeated. (Note that we keep the training set window fixed, for more efficient training, so every time we add a new observation to the training set, we remove the observation from the beginning of the set.)This process provides a more robust estimation of how the model will perform in practice. However, it comes at the computational cost of creating so many models. This is acceptable if the data is small or if the model is simple, but could be an issue at scale. Walk-forward validation is the gold standard of time series model evaluation and is recommended for your own projects.
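A quick way to inspect those residuals, before moving on to the evaluation itself, is sketched below using the `results` object fitted above.
###Code
# Distribution of the in-sample residuals: a roughly zero-centred, symmetric
# histogram suggests the model's predictions are not systematically biased.
residuals = np.asarray(results.resid)
print('residual mean: {:.4f}'.format(residuals.mean()))
plt.figure(figsize=(10, 4))
plt.hist(residuals, bins=50)
plt.title('Distribution of in-sample residuals')
plt.show()
###Output
_____no_output_____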
###Code
Image('./images/ts_cross_validation.png')
###Output
_____no_output_____
###Markdown
Create a test data point for each HORIZON step.
###Code
test_shifted = test.copy()
for t in range(1, HORIZON):
test_shifted['load+'+str(t)] = test_shifted['load'].shift(-t, freq='H')
test_shifted = test_shifted.dropna(how='any')
test_shifted.head(5)
###Output
_____no_output_____
###Markdown
Make predictions on the test data
###Code
%%time
training_window = 720 # dedicate 30 days (720 hours) for training
train_ts = train['load']
test_ts = test_shifted
history = [x for x in train_ts]
history = history[(-training_window):]
predictions = list()
# let's use a simpler model for demonstration
order = (2, 1, 0)
seasonal_order = (1, 1, 0, 24)
for t in range(test_ts.shape[0]):
model = SARIMAX(endog=history, order=order, seasonal_order=seasonal_order)
model_fit = model.fit()
yhat = model_fit.forecast(steps = HORIZON)
predictions.append(yhat)
obs = list(test_ts.iloc[t])
# move the training window
history.append(obs[0])
history.pop(0)
print(test_ts.index[t])
print(t+1, ': predicted =', yhat, 'expected =', obs)
###Output
2014-12-30 00:00:00
1 : predicted = [0.32 0.29 0.28] expected = [0.32945389435989236, 0.2900626678603402, 0.2739480752014323]
2014-12-30 01:00:00
2 : predicted = [0.3 0.29 0.3 ] expected = [0.2900626678603402, 0.2739480752014323, 0.26812891674127126]
2014-12-30 02:00:00
3 : predicted = [0.27 0.28 0.32] expected = [0.2739480752014323, 0.26812891674127126, 0.3025962399283795]
2014-12-30 03:00:00
4 : predicted = [0.28 0.32 0.42] expected = [0.26812891674127126, 0.3025962399283795, 0.40823634735899716]
2014-12-30 04:00:00
5 : predicted = [0.3 0.39 0.54] expected = [0.3025962399283795, 0.40823634735899716, 0.5689346463742166]
2014-12-30 05:00:00
6 : predicted = [0.4 0.56 0.67] expected = [0.40823634735899716, 0.5689346463742166, 0.6799462846911368]
2014-12-30 06:00:00
7 : predicted = [0.57 0.68 0.75] expected = [0.5689346463742166, 0.6799462846911368, 0.7309758281110115]
2014-12-30 07:00:00
8 : predicted = [0.68 0.75 0.8 ] expected = [0.6799462846911368, 0.7309758281110115, 0.7511190689346463]
2014-12-30 08:00:00
9 : predicted = [0.75 0.8 0.82] expected = [0.7309758281110115, 0.7511190689346463, 0.7636526410026856]
2014-12-30 09:00:00
10 : predicted = [0.76 0.78 0.78] expected = [0.7511190689346463, 0.7636526410026856, 0.7381378692927483]
2014-12-30 10:00:00
11 : predicted = [0.76 0.75 0.74] expected = [0.7636526410026856, 0.7381378692927483, 0.7188898836168307]
2014-12-30 11:00:00
12 : predicted = [0.77 0.76 0.75] expected = [0.7381378692927483, 0.7188898836168307, 0.7090420769919425]
2014-12-30 12:00:00
13 : predicted = [0.7 0.68 0.69] expected = [0.7188898836168307, 0.7090420769919425, 0.7081468218442255]
2014-12-30 13:00:00
14 : predicted = [0.72 0.73 0.76] expected = [0.7090420769919425, 0.7081468218442255, 0.7385854968666068]
2014-12-30 14:00:00
15 : predicted = [0.71 0.73 0.86] expected = [0.7081468218442255, 0.7385854968666068, 0.8478066248880931]
2014-12-30 15:00:00
16 : predicted = [0.73 0.85 0.97] expected = [0.7385854968666068, 0.8478066248880931, 0.9516562220232765]
2014-12-30 16:00:00
17 : predicted = [0.87 0.99 0.97] expected = [0.8478066248880931, 0.9516562220232765, 0.934198746642793]
2014-12-30 17:00:00
18 : predicted = [0.94 0.92 0.86] expected = [0.9516562220232765, 0.934198746642793, 0.8876454789615038]
2014-12-30 18:00:00
19 : predicted = [0.94 0.89 0.82] expected = [0.934198746642793, 0.8876454789615038, 0.8294538943598924]
2014-12-30 19:00:00
20 : predicted = [0.88 0.82 0.71] expected = [0.8876454789615038, 0.8294538943598924, 0.7197851387645477]
2014-12-30 20:00:00
21 : predicted = [0.83 0.72 0.58] expected = [0.8294538943598924, 0.7197851387645477, 0.5747538048343777]
2014-12-30 21:00:00
22 : predicted = [0.72 0.58 0.47] expected = [0.7197851387645477, 0.5747538048343777, 0.4592658907788718]
2014-12-30 22:00:00
23 : predicted = [0.58 0.47 0.39] expected = [0.5747538048343777, 0.4592658907788718, 0.3858549686660697]
2014-12-30 23:00:00
24 : predicted = [0.46 0.38 0.34] expected = [0.4592658907788718, 0.3858549686660697, 0.34377797672336596]
2014-12-31 00:00:00
25 : predicted = [0.38 0.34 0.33] expected = [0.3858549686660697, 0.34377797672336596, 0.32542524619516544]
2014-12-31 01:00:00
26 : predicted = [0.36 0.34 0.34] expected = [0.34377797672336596, 0.32542524619516544, 0.33034914950760963]
2014-12-31 02:00:00
27 : predicted = [0.32 0.32 0.35] expected = [0.32542524619516544, 0.33034914950760963, 0.3706356311548791]
2014-12-31 03:00:00
28 : predicted = [0.32 0.36 0.47] expected = [0.33034914950760963, 0.3706356311548791, 0.470008952551477]
2014-12-31 04:00:00
29 : predicted = [0.37 0.48 0.65] expected = [0.3706356311548791, 0.470008952551477, 0.6145926589077886]
2014-12-31 05:00:00
30 : predicted = [0.48 0.64 0.75] expected = [0.470008952551477, 0.6145926589077886, 0.7247090420769919]
2014-12-31 06:00:00
31 : predicted = [0.63 0.73 0.79] expected = [0.6145926589077886, 0.7247090420769919, 0.786034019695613]
2014-12-31 07:00:00
32 : predicted = [0.71 0.76 0.79] expected = [0.7247090420769919, 0.786034019695613, 0.8012533572068039]
2014-12-31 08:00:00
33 : predicted = [0.78 0.82 0.83] expected = [0.786034019695613, 0.8012533572068039, 0.7994628469113696]
2014-12-31 09:00:00
34 : predicted = [0.82 0.83 0.81] expected = [0.8012533572068039, 0.7994628469113696, 0.780214861235452]
2014-12-31 10:00:00
35 : predicted = [0.8 0.78 0.76] expected = [0.7994628469113696, 0.780214861235452, 0.7587287376902416]
2014-12-31 11:00:00
36 : predicted = [0.77 0.75 0.74] expected = [0.780214861235452, 0.7587287376902416, 0.7367949865711727]
2014-12-31 12:00:00
37 : predicted = [0.77 0.76 0.76] expected = [0.7587287376902416, 0.7367949865711727, 0.7188898836168307]
2014-12-31 13:00:00
38 : predicted = [0.75 0.75 0.78] expected = [0.7367949865711727, 0.7188898836168307, 0.7273948075201431]
2014-12-31 14:00:00
39 : predicted = [0.73 0.75 0.87] expected = [0.7188898836168307, 0.7273948075201431, 0.8299015219337511]
2014-12-31 15:00:00
40 : predicted = [0.74 0.85 0.96] expected = [0.7273948075201431, 0.8299015219337511, 0.909579230080573]
2014-12-31 16:00:00
41 : predicted = [0.83 0.94 0.93] expected = [0.8299015219337511, 0.909579230080573, 0.855863921217547]
2014-12-31 17:00:00
42 : predicted = [0.94 0.93 0.88] expected = [0.909579230080573, 0.855863921217547, 0.7721575649059982]
2014-12-31 18:00:00
43 : predicted = [0.87 0.82 0.77] expected = [0.855863921217547, 0.7721575649059982, 0.7023276633840643]
2014-12-31 19:00:00
44 : predicted = [0.79 0.73 0.63] expected = [0.7721575649059982, 0.7023276633840643, 0.6195165622202325]
2014-12-31 20:00:00
45 : predicted = [0.7 0.59 0.46] expected = [0.7023276633840643, 0.6195165622202325, 0.5425246195165621]
2014-12-31 21:00:00
46 : predicted = [0.6 0.47 0.36] expected = [0.6195165622202325, 0.5425246195165621, 0.4735899731423454]
CPU times: user 26min 45s, sys: 12min 48s, total: 39min 33s
Wall time: 7min 12s
###Markdown
Compare predictions to actual load
###Code
eval_df = pd.DataFrame(predictions, columns=['t+'+str(t) for t in range(1, HORIZON+1)])
eval_df['timestamp'] = test.index[0:len(test.index)-HORIZON+1]
eval_df = pd.melt(eval_df, id_vars='timestamp', value_name='prediction', var_name='h')
eval_df['actual'] = np.array(np.transpose(test_ts)).ravel()
eval_df[['prediction', 'actual']] = scaler.inverse_transform(eval_df[['prediction', 'actual']])
eval_df.head()
###Output
_____no_output_____
###Markdown
Compute the **mean absolute percentage error (MAPE)** over all predictions$$MAPE = \frac{1}{n} \sum_{t=1}^{n}|\frac{actual_t - predicted_t}{actual_t}|$$
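The `mape` helper used in the next cell is assumed to come from the notebook's utility imports (it is not defined in this excerpt); a minimal equivalent of the formula above would be:

```python
import numpy as np

def mape(predictions, actuals):
    # Mean absolute percentage error as a fraction (multiply by 100 for percent).
    predictions, actuals = np.asarray(predictions), np.asarray(actuals)
    return np.mean(np.abs((actuals - predictions) / actuals))
```

Note that the argument order matches the calls below: predictions first, then actuals.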
###Code
if(HORIZON > 1):
eval_df['APE'] = (eval_df['prediction'] - eval_df['actual']).abs() / eval_df['actual']
print(eval_df.groupby('h')['APE'].mean())
print('One step forecast MAPE: ', (mape(eval_df[eval_df['h'] == 't+1']['prediction'], eval_df[eval_df['h'] == 't+1']['actual']))*100, '%')
print('Multi-step forecast MAPE: ', mape(eval_df['prediction'], eval_df['actual'])*100, '%')
###Output
Multi-step forecast MAPE: 1.1433392660923376 %
###Markdown
Plot the predictions vs the actuals for the first week of the test set
###Code
if(HORIZON == 1):
## Plotting single step forecast
eval_df.plot(x='timestamp', y=['actual', 'prediction'], style=['r', 'b'], figsize=(15, 8))
else:
## Plotting multi step forecast
plot_df = eval_df[(eval_df.h=='t+1')][['timestamp', 'actual']]
for t in range(1, HORIZON+1):
plot_df['t+'+str(t)] = eval_df[(eval_df.h=='t+'+str(t))]['prediction'].values
fig = plt.figure(figsize=(15, 8))
ax = plt.plot(plot_df['timestamp'], plot_df['actual'], color='red', linewidth=4.0)
ax = fig.add_subplot(111)
for t in range(1, HORIZON+1):
x = plot_df['timestamp'][(t-1):]
y = plot_df['t+'+str(t)][0:len(x)]
ax.plot(x, y, color='blue', linewidth=4*math.pow(.9,t), alpha=math.pow(0.8,t))
ax.legend(loc='best')
plt.xlabel('timestamp', fontsize=12)
plt.ylabel('load', fontsize=12)
plt.show()
###Output
_____no_output_____ |
boards/Pynq-Z2/mqttsn/notebooks/04_network_processor.ipynb | ###Markdown
Network IO ProcessorThe Network IO Processor (IOP) enables raw access to the Ethernet interface from within Python.The usage is similar in many ways to sending and receiving Ethernet frames using raw sockets.The advantages of this access include:1. Packets can be sent with low latency, bypassing the normal Linux kernel stack.2. Access to the network interface is memory-mapped, enabling network-connected accelerators to be prototyped on the ARM cores and then migrated into the Programmable Logic (PL). 1. Downloading overlayNow let's download the overlay and do the necessary configuration.
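(As a point of reference, the raw-socket comparison above corresponds to something like the following standard-Python sketch; it is Linux-only, needs root, is not part of the PYNQ API, and the interface name and frame bytes are illustrative placeholders.)

```python
import socket

ETH_P_ALL = 0x0003  # match any EtherType

# Open a raw layer-2 socket and bind it to an interface (placeholder name).
s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
s.bind(('eth0', 0))

# Broadcast dst MAC + example src MAC + IPv4 EtherType + padded payload.
frame = (b'\xff' * 6
         + bytes.fromhex('8a70bd292b40')
         + b'\x08\x00'
         + b'hello'.ljust(46, b'\x00'))
s.send(frame)
s.close()
```

The memory-mapped IOP path used in this notebook plays the same role but bypasses the kernel network stack.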
###Code
from pynq_networking import MqttsnOverlay
from site import getsitepackages
import os
mqttsn_bit = os.path.join(getsitepackages()[0], 'pynq_networking',
'overlays', 'mqttsn', 'mqttsn.bit')
overlay = MqttsnOverlay(mqttsn_bit)
overlay.download()
import timeit
import logging
logging.getLogger("kamene.runtime").setLevel(logging.ERROR)
from kamene.all import *
from wurlitzer import sys_pipes
from pynq_networking.lib.network_iop import NetworkIOP
from pynq_networking.lib.slurper import PacketSlurper
from pynq_networking.lib.pynqsocket import L2PynqSocket
conf.L2PynqSocket = L2PynqSocket
###Output
_____no_output_____
###Markdown
3. Bring up interfaces and modulesWe can bring up a network interface for testing. For hardware acceleration, we need to load the Linux kernel driver.The Python class `LinkManager` is a wrapper for the following commands:
```csh
chmod 777 ./kernel_module/*.sh
ifconfig br0:1 192.168.3.99
ifconfig br0:0 192.168.1.99
./kernel_module/link_up.sh
```
###Code
from pynq_networking import LinkManager
if_manager = LinkManager()
if_manager.if_up("br0:1", "192.168.3.99")
if_manager.if_up("br0:0", "192.168.1.99")
if_manager.kernel_up()
###Output
_____no_output_____
###Markdown
The kernel module only needs to be loaded once after the board has been booted.
###Code
mynet = NetworkIOP()
conf.L2PynqSocket().flush()
###Output
156 packets flushed
###Markdown
4. Measuring performanceWe can do a bit of research here. Let's first find out how fast we can push out packets, as shown below.
###Code
import numpy as np
import matplotlib.pyplot as plt
from pynq import PL
from pynq import MMIO
from pynq_networking import *
sizes = [64, 128, 256, 512, 1024, 1500]
count = 500
pps = []
bps = []
usperpacket = []
cyclesperword = []
theoretical = []
mmio = MMIO(0xFFFC0000, 0x10000)
my_ip_str = '192.168.1.104'
my_mac_str = '8a:70:bd:29:2b:40'
for size in sizes:
payload = b''.join([b'0' for _ in range(size)])
frame = Ether(src=my_mac_str, dst='FF:FF:FF:FF:FF:FF')/\
IP(src=my_ip_str, dst="192.168.1.2")/\
UDP(sport=50000, dport=1884)/MQTTSN()/MQTTSN_CONNECT()
frame = bytes(frame) + payload
slurper = conf.L2PynqSocket().slurper
kameneSocket = conf.L2socket()
write32 = slurper.write32
array = slurper.mmio.array
mem = slurper.mmio.mem
leng = len(frame)
start_time = timeit.default_timer()
for _ in range(count):
frame_bytes = bytes(frame)
slurper.send(frame_bytes)
elapsed = timeit.default_timer() - start_time
bps.append(count*len(frame)*8/elapsed)
pps.append(count/elapsed)
usperpacket.append(1000000/(count/elapsed))
cyclesperword.append((100000000*elapsed)/(count*(len(frame)/4)))
theoretical.append(100000000/(len(frame)/4))
plt.title("Delay / microseconds per packet")
plt.plot(sizes, usperpacket, linewidth=2.0)
plt.ylim(ymin=0)
plt.xlabel('Packet size / bytes')
plt.grid(True)
plt.show()
plt.title("Achieved cycles per word")
plt.plot(sizes, cyclesperword, linewidth=2.0)
plt.ylim(ymin=0)
plt.xlabel('Packet size / bytes')
plt.grid(True)
plt.show()
plt.title("Achieved bits per second")
plt.plot(sizes, bps, linewidth=2.0)
plt.xlabel('Packet size / bytes')
plt.grid(True)
plt.show()
plt.title("Packets per second")
plot0, = plt.plot(sizes, theoretical, label='Theoretical',
linewidth=2.0, color='red')
plot1, = plt.plot(sizes, pps, label='Achieved',
linewidth=2.0, color='green')
plt.legend(handles=[plot0, plot1])
plt.ylim(ymin=0, ymax=3500000)
plt.xlabel('Packet size / bytes')
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
5. CleanupWe can remove the kernel module and bring down the interfaces at the end.
###Code
if_manager.kernel_down()
if_manager.if_down('br0:0')
if_manager.if_down('br0:1')
###Output
_____no_output_____ |
tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TFP Probabilistic Layers: Regression Run in Google Colab View source on GitHub In this example we show how to fit regression models using TFP's "probabilistic layers." Dependencies & Prerequisites
###Code
#@title Install { display-mode: "form" }
TF_Installation = "Nightly" #@param ["Nightly", "Stable", "System"]
if TF_Installation == "Nightly":
!pip install -q tf-nightly
print("Installation of `tf-nightly` complete.")
elif TF_Installation == "Stable":
!pip install -q --upgrade tensorflow
print("Installation of `tensorflow` complete.")
elif TF_Installation == "System":
pass
else:
raise ValueError("Selection Error: Please select a valid "
"installation option.")
#@title Install { display-mode: "form" }
TFP_Installation = "Nightly" #@param ["Nightly", "Stable", "System"]
if TFP_Installation == "Nightly":
!pip install -q tfp-nightly
print("Installation of `tfp-nightly` complete.")
elif TFP_Installation == "Stable":
!pip install -q --upgrade tensorflow-probability
print("Installation of `tensorflow-probability` complete.")
elif TFP_Installation == "System":
pass
else:
raise ValueError("Selection Error: Please select a valid "
"installation option.")
#@title Import { display-mode: "form" }
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
sns.reset_defaults()
#sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='talk',font_scale=0.7)
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
tf.enable_v2_behavior()
%matplotlib inline
tfd = tfp.distributions
###Output
_____no_output_____
###Markdown
Motivation Wouldn't it be great if we could use TFP to specify a probabilistic model then simply minimize the negative log-likelihood, i.e.,
###Code
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
###Output
_____no_output_____
###Markdown
Well not only is it possible, but this colab shows how! (In context of linear regression problems.)
###Code
#@title Synthesize dataset.
w0 = 0.125
b0 = 5.
x_range = [-20, 60]
def load_dataset(n=150, n_tst=10):
np.random.seed(43)
def s(x):
g = (x - x_range[0]) / (x_range[1] - x_range[0])
return 3 * (0.25 + g**2.)
x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0]
eps = np.random.randn(n) * s(x)
y = (w0 * x * (1. + np.sin(x)) + b0) + eps
x = x[..., np.newaxis]
x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32)
x_tst = x_tst[..., np.newaxis]
return y, x, x_tst
y, x, x_tst = load_dataset()
###Output
_____no_output_____
###Markdown
Case 1: No Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.05), loss=negloglik)
model.fit(x, y, epochs=500, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 1: No uncertainty.
w = np.squeeze(model.layers[-2].kernel.numpy())
b = np.squeeze(model.layers[-2].bias.numpy())
plt.figure(figsize=[6, 1.5]) # inches
#plt.figure(figsize=[8, 5]) # inches
plt.plot(x, y, 'b.', label='observed');
plt.plot(x_tst, yhat.mean(),'r', label='fit', linewidth=4);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 2: Aleatoric Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1 + 1),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.05), loss=negloglik)
model.fit(x, y, epochs=500, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 2: Aleatoric Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
m = yhat.mean()
s = yhat.stddev()
plt.plot(x_tst, m, 'r', linewidth=4, label='fit');
plt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'$\mu+2\sigma$');
plt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'$\mu-2\sigma$');
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 3: Epistemic Uncertainty
###Code
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
c = np.log(np.expm1(1.))
return tf.keras.Sequential([
tfp.layers.VariableLayer(2 * n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t[..., :n],
scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
reinterpreted_batch_ndims=1)),
])
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def prior_trainable(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
return tf.keras.Sequential([
tfp.layers.VariableLayer(n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t, scale=1),
reinterpreted_batch_ndims=1)),
])
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.05), loss=negloglik)
model.fit(x, y, epochs=500, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 3: Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.clf();
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 25:
plt.plot(x_tst, m, 'r', label='fit', linewidth=0.5)
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='fit', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 4: Aleatoric & Epistemic Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.05), loss=negloglik)
model.fit(x, y, epochs=500, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 4: Both Aleatoric & Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 15:
plt.plot(x_tst, m, 'r', label='fit', linewidth=1.)
plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label=r'$\mu+2\sigma$');
plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label=r'$\mu-2\sigma$');
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='fit', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 5: Functional Uncertainty [Experimental]
###Code
#@title Custom PSD Kernel
class KernelFn(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(KernelFn, self).__init__(**kwargs)
dtype = kwargs.get('dtype', None)
self._bias_variance = self.add_variable(
initializer=tf.constant_initializer(.54),
dtype=dtype,
name='bias_variance')
self._slope_variance = self.add_variable(
initializer=tf.constant_initializer(.54),
dtype=dtype,
name='slope_variance')
self._period = self.add_variable(
initializer=tf.constant_initializer(2 * np.pi),
dtype=dtype,
name='period')
self._amplitude = self.add_variable(
initializer=tf.constant_initializer(.54),
dtype=dtype,
name='amplitude')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
linear = tfp.positive_semidefinite_kernels.Linear(
bias_variance=tf.nn.softplus(self._bias_variance),
slope_variance=tf.nn.softplus(self._slope_variance))
periodic = tfp.positive_semidefinite_kernels.ExpSinSquared(
amplitude=tf.nn.softplus(self._amplitude),
period=tf.nn.softplus(self._period))
return linear * periodic
# VGP is data hungry!
y, x, x_tst = load_dataset(n=1000, n_tst=1000)
# Build model.
num_inducing_points = 50
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=[1], dtype=x.dtype),
tf.keras.layers.Dense(1, kernel_initializer='ones', use_bias=False),
tfp.layers.VariationalGaussianProcess(
num_inducing_points=num_inducing_points,
kernel_provider=KernelFn(dtype=x.dtype),
event_shape=[1],
inducing_index_points_initializer=tf.constant_initializer(
np.linspace(*x_range,
num=num_inducing_points,
dtype=x.dtype)[..., np.newaxis])
),
])
# Do inference.
batch_size = 64
loss=lambda y, rv_y: rv_y.variational_loss(
y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0])
model.compile(
optimizer=tf.optimizers.Adam(learning_rate=0.1, beta_1=0.5, beta_2=0.9),
loss=loss)
model.fit(x, y, epochs=500, batch_size=batch_size, verbose=False)
# Profit.
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 5: Functional Uncertainty
y, x, _ = load_dataset()
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
num_samples = 7
samples_ = yhat.sample(num_samples).numpy()
plt.plot(np.tile(x_tst, num_samples),
samples_[..., 0].T,
'r',
linewidth=0.9,
label='fit');
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.savefig('/tmp/fig5.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Probability Authors.Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TFP Probabilistic Layers: Regression View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook In this example we show how to fit regression models using TFP's "probabilistic layers." Dependencies & Prerequisites
###Code
#@title Import { display-mode: "form" }
from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
sns.reset_defaults()
#sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='talk',font_scale=0.7)
%matplotlib inline
tfd = tfp.distributions
###Output
_____no_output_____
###Markdown
Make things Fast! Before we dive in, let's make sure we're using a GPU for this demo. To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU".The following snippet will verify that we have access to a GPU.
###Code
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
###Output
WARNING: GPU device not found.
###Markdown
Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.) Motivation Wouldn't it be great if we could use TFP to specify a probabilistic model then simply minimize the negative log-likelihood, i.e.,
###Code
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
###Output
_____no_output_____
###Markdown
Well not only is it possible, but this colab shows how! (In context of linear regression problems.)
###Code
#@title Synthesize dataset.
w0 = 0.125
b0 = 5.
x_range = [-20, 60]
def load_dataset(n=150, n_tst=150):
np.random.seed(43)
def s(x):
g = (x - x_range[0]) / (x_range[1] - x_range[0])
return 3 * (0.25 + g**2.)
x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0]
eps = np.random.randn(n) * s(x)
y = (w0 * x * (1. + np.sin(x)) + b0) + eps
x = x[..., np.newaxis]
x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32)
x_tst = x_tst[..., np.newaxis]
return y, x, x_tst
y, x, x_tst = load_dataset()
###Output
_____no_output_____
###Markdown
Case 1: No Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 1: No uncertainty.
w = np.squeeze(model.layers[-2].kernel.numpy())
b = np.squeeze(model.layers[-2].bias.numpy())
plt.figure(figsize=[6, 1.5]) # inches
#plt.figure(figsize=[8, 5]) # inches
plt.plot(x, y, 'b.', label='observed');
plt.plot(x_tst, yhat.mean(),'r', label='mean', linewidth=4);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 2: Aleatoric Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1 + 1),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 2: Aleatoric Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
m = yhat.mean()
s = yhat.stddev()
plt.plot(x_tst, m, 'r', linewidth=4, label='mean');
plt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'mean + 2 stddev');
plt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'mean - 2 stddev');
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 3: Epistemic Uncertainty
###Code
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
c = np.log(np.expm1(1.))
return tf.keras.Sequential([
tfp.layers.VariableLayer(2 * n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t[..., :n],
scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
reinterpreted_batch_ndims=1)),
])
# Specify the prior over `keras.layers.Dense` `kernel` and `bias`.
def prior_trainable(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
return tf.keras.Sequential([
tfp.layers.VariableLayer(n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t, scale=1),
reinterpreted_batch_ndims=1)),
])
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 3: Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.clf();
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 25:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=0.5)
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 4: Aleatoric & Epistemic Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 4: Both Aleatoric & Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 15:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=1.)
plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label='ensemble means + 2 ensemble stdev' if i == 0 else None);
plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label='ensemble means - 2 ensemble stdev' if i == 0 else None);
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 5: Functional Uncertainty
###Code
#@title Custom PSD Kernel
class RBFKernelFn(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(RBFKernelFn, self).__init__(**kwargs)
dtype = kwargs.get('dtype', None)
self._amplitude = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='amplitude')
self._length_scale = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='length_scale')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
return tfp.math.psd_kernels.ExponentiatedQuadratic(
amplitude=tf.nn.softplus(0.1 * self._amplitude),
length_scale=tf.nn.softplus(5. * self._length_scale)
)
# For numeric stability, set the default floating-point dtype to float64
tf.keras.backend.set_floatx('float64')
# Build model.
num_inducing_points = 40
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=[1]),
tf.keras.layers.Dense(1, kernel_initializer='ones', use_bias=False),
tfp.layers.VariationalGaussianProcess(
num_inducing_points=num_inducing_points,
kernel_provider=RBFKernelFn(),
event_shape=[1],
inducing_index_points_initializer=tf.constant_initializer(
np.linspace(*x_range, num=num_inducing_points,
dtype=x.dtype)[..., np.newaxis]),
unconstrained_observation_noise_variance_initializer=(
tf.constant_initializer(np.array(0.54).astype(x.dtype))),
),
])
# Do inference.
batch_size = 32
loss = lambda y, rv_y: rv_y.variational_loss(
y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0])
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=loss)
model.fit(x, y, batch_size=batch_size, epochs=1000, verbose=False)
# Profit.
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 5: Functional Uncertainty
y, x, _ = load_dataset()
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
num_samples = 7
for i in range(num_samples):
sample_ = yhat.sample().numpy()
plt.plot(x_tst,
sample_[..., 0].T,
'r',
linewidth=0.9,
label='ensemble means' if i == 0 else None);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig5.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TFP Probabilistic Layers: Regression Run in Google Colab View source on GitHub In this example we show how to fit regression models using TFP's "probabilistic layers." Dependencies & Prerequisites
###Code
#@title Install { display-mode: "form" }
TF_Installation = 'TF2 Nightly (GPU)' #@param ['TF2 Nightly (GPU)', 'TF2 Stable (GPU)', 'TF1 Nightly (GPU)', 'TF1 Stable (GPU)','System']
if TF_Installation == 'TF2 Nightly (GPU)':
!pip install -q --upgrade tf-nightly-gpu-2.0-preview
print('Installation of `tf-nightly-gpu-2.0-preview` complete.')
elif TF_Installation == 'TF2 Stable (GPU)':
!pip install -q --upgrade tensorflow-gpu==2.0.0-alpha0
print('Installation of `tensorflow-gpu==2.0.0-alpha0` complete.')
elif TF_Installation == 'TF1 Nightly (GPU)':
!pip install -q --upgrade tf-nightly-gpu
print('Installation of `tf-nightly-gpu` complete.')
elif TF_Installation == 'TF1 Stable (GPU)':
!pip install -q --upgrade tensorflow-gpu
print('Installation of `tensorflow-gpu` complete.')
elif TF_Installation == 'System':
pass
else:
raise ValueError('Selection Error: Please select a valid '
'installation option.')
#@title Install { display-mode: "form" }
TFP_Installation = "Nightly" #@param ["Nightly", "Stable", "System"]
if TFP_Installation == "Nightly":
!pip install -q tfp-nightly
print("Installation of `tfp-nightly` complete.")
elif TFP_Installation == "Stable":
!pip install -q --upgrade tensorflow-probability
print("Installation of `tensorflow-probability` complete.")
elif TFP_Installation == "System":
pass
else:
raise ValueError("Selection Error: Please select a valid "
"installation option.")
#@title Import { display-mode: "form" }
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow as tf
from tensorflow.python import tf2
if not tf2.enabled():
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
assert tf2.enabled()
import tensorflow_probability as tfp
sns.reset_defaults()
#sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='talk',font_scale=0.7)
%matplotlib inline
tfd = tfp.distributions
###Output
_____no_output_____
###Markdown
Make things Fast! Before we dive in, let's make sure we're using a GPU for this demo. To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU".The following snippet will verify that we have access to a GPU.
###Code
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
###Output
_____no_output_____
###Markdown
Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.) Motivation Wouldn't it be great if we could use TFP to specify a probabilistic model then simply minimize the negative log-likelihood, i.e.,
###Code
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
###Output
_____no_output_____
###Markdown
Well not only is it possible, but this colab shows how! (In context of linear regression problems.)
###Code
#@title Synthesize dataset.
w0 = 0.125
b0 = 5.
x_range = [-20, 60]
def load_dataset(n=150, n_tst=150):
np.random.seed(43)
def s(x):
g = (x - x_range[0]) / (x_range[1] - x_range[0])
return 3 * (0.25 + g**2.)
x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0]
eps = np.random.randn(n) * s(x)
y = (w0 * x * (1. + np.sin(x)) + b0) + eps
x = x[..., np.newaxis]
x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32)
x_tst = x_tst[..., np.newaxis]
return y, x, x_tst
y, x, x_tst = load_dataset()
###Output
_____no_output_____
###Markdown
Case 1: No Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 1: No uncertainty.
w = np.squeeze(model.layers[-2].kernel.numpy())
b = np.squeeze(model.layers[-2].bias.numpy())
plt.figure(figsize=[6, 1.5]) # inches
#plt.figure(figsize=[8, 5]) # inches
plt.plot(x, y, 'b.', label='observed');
plt.plot(x_tst, yhat.mean(),'r', label='mean', linewidth=4);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 2: Aleatoric Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1 + 1),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 2: Aleatoric Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
m = yhat.mean()
s = yhat.stddev()
plt.plot(x_tst, m, 'r', linewidth=4, label='mean');
plt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'mean + 2 stddev');
plt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'mean - 2 stddev');
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 3: Epistemic Uncertainty
###Code
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
c = np.log(np.expm1(1.))
return tf.keras.Sequential([
tfp.layers.VariableLayer(2 * n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t[..., :n],
scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
reinterpreted_batch_ndims=1)),
])
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def prior_trainable(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
return tf.keras.Sequential([
tfp.layers.VariableLayer(n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t, scale=1),
reinterpreted_batch_ndims=1)),
])
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 3: Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.clf();
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 25:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=0.5)
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 4: Aleatoric & Epistemic Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 4: Both Aleatoric & Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 15:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=1.)
plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label='ensemble means + 2 ensemble stdev' if i == 0 else None);
plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label='ensemble means - 2 ensemble stdev' if i == 0 else None);
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 5: Functional Uncertainty
###Code
#@title Custom PSD Kernel
class RBFKernelFn(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(RBFKernelFn, self).__init__(**kwargs)
dtype = kwargs.get('dtype', None)
self._amplitude = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='amplitude')
self._length_scale = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='length_scale')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
return tfp.positive_semidefinite_kernels.ExponentiatedQuadratic(
amplitude=tf.nn.softplus(0.1 * self._amplitude),
length_scale=tf.nn.softplus(5. * self._length_scale)
)
# Build model.
num_inducing_points = 40
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=[1], dtype=x.dtype),
tf.keras.layers.Dense(1, kernel_initializer='ones', use_bias=False),
tfp.layers.VariationalGaussianProcess(
num_inducing_points=num_inducing_points,
kernel_provider=RBFKernelFn(dtype=x.dtype),
event_shape=[1],
inducing_index_points_initializer=tf.constant_initializer(
np.linspace(*x_range, num=num_inducing_points,
dtype=x.dtype)[..., np.newaxis]),
unconstrained_observation_noise_variance_initializer=(
tf.constant_initializer(np.array(0.54).astype(x.dtype))),
),
])
# Do inference.
batch_size = 32
loss = lambda y, rv_y: rv_y.variational_loss(
y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0])
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=loss)
model.fit(x, y, batch_size=batch_size, epochs=1000, verbose=False)
# Profit.
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 5: Functional Uncertainty
y, x, _ = load_dataset()
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
num_samples = 7
for i in range(num_samples):
sample_ = yhat.sample().numpy()
plt.plot(x_tst,
sample_[..., 0].T,
'r',
linewidth=0.9,
label='ensemble means' if i == 0 else None);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig5.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TFP Probabilistic Layers: Regression Run in Google Colab View source on GitHub In this example we show how to fit regression models using TFP's "probabilistic layers." Dependencies & Prerequisites
###Code
#@title Install { display-mode: "form" }
TF_Installation = 'TF2 Nightly (GPU)' #@param ['TF2 Nightly (GPU)', 'TF2 Stable (GPU)', 'TF1 Nightly (GPU)', 'TF1 Stable (GPU)','System']
if TF_Installation == 'TF2 Nightly (GPU)':
!pip install -q --upgrade tf-nightly-gpu-2.0-preview
print('Installation of `tf-nightly-gpu-2.0-preview` complete.')
elif TF_Installation == 'TF2 Stable (GPU)':
!pip install -q --upgrade tensorflow-gpu==2.0.0-alpha0
print('Installation of `tensorflow-gpu==2.0.0-alpha0` complete.')
elif TF_Installation == 'TF1 Nightly (GPU)':
!pip install -q --upgrade tf-nightly-gpu
print('Installation of `tf-nightly-gpu` complete.')
elif TF_Installation == 'TF1 Stable (GPU)':
!pip install -q --upgrade tensorflow-gpu
print('Installation of `tensorflow-gpu` complete.')
elif TF_Installation == 'System':
pass
else:
raise ValueError('Selection Error: Please select a valid '
'installation option.')
#@title Install { display-mode: "form" }
TFP_Installation = "Nightly" #@param ["Nightly", "Stable", "System"]
if TFP_Installation == "Nightly":
!pip install -q tfp-nightly
print("Installation of `tfp-nightly` complete.")
elif TFP_Installation == "Stable":
!pip install -q --upgrade tensorflow-probability
print("Installation of `tensorflow-probability` complete.")
elif TFP_Installation == "System":
pass
else:
raise ValueError("Selection Error: Please select a valid "
"installation option.")
#@title Import { display-mode: "form" }
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow as tf
from tensorflow.python import tf2
if not tf2.enabled():
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
assert tf2.enabled()
import tensorflow_probability as tfp
sns.reset_defaults()
#sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='talk',font_scale=0.7)
%matplotlib inline
tfd = tfp.distributions
###Output
_____no_output_____
###Markdown
Make things Fast! Before we dive in, let's make sure we're using a GPU for this demo. To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU".The following snippet will verify that we have access to a GPU.
###Code
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
###Output
_____no_output_____
###Markdown
Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.) Motivation Wouldn't it be great if we could use TFP to specify a probabilistic model then simply minimize the negative log-likelihood, i.e.,
###Code
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
###Output
_____no_output_____
###Markdown
Well not only is it possible, but this colab shows how! (In context of linear regression problems.)
###Code
#@title Synthesize dataset.
w0 = 0.125
b0 = 5.
x_range = [-20, 60]
def load_dataset(n=150, n_tst=150):
np.random.seed(43)
def s(x):
g = (x - x_range[0]) / (x_range[1] - x_range[0])
return 3 * (0.25 + g**2.)
x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0]
eps = np.random.randn(n) * s(x)
y = (w0 * x * (1. + np.sin(x)) + b0) + eps
x = x[..., np.newaxis]
x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32)
x_tst = x_tst[..., np.newaxis]
return y, x, x_tst
y, x, x_tst = load_dataset()
###Output
_____no_output_____
###Markdown
Case 1: No Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
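# Editor's note: with the scale fixed at 1, the Normal negative log-likelihood equals
# squared error up to an additive constant, so this is ordinary least-squares linear
# regression wrapped in a distribution; yhat.mean() below is just the fitted line.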
#@title Figure 1: No uncertainty.
w = np.squeeze(model.layers[-2].kernel.numpy())
b = np.squeeze(model.layers[-2].bias.numpy())
plt.figure(figsize=[6, 1.5]) # inches
#plt.figure(figsize=[8, 5]) # inches
plt.plot(x, y, 'b.', label='observed');
plt.plot(x_tst, yhat.mean(),'r', label='mean', linewidth=4);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 2: Aleatoric Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1 + 1),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))),
])
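# Editor's note: the Dense layer now emits two numbers per example. The first is used
# as the mean; the second is passed through a scaled softplus (plus a 1e-3 floor) to
# give a positive, input-dependent standard deviation -- i.e. learned aleatoric noise.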
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
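# Editor's note: yhat.mean() and yhat.stddev() both vary with x here, so the
# mean +/- 2 stddev band plotted below widens where the data is noisier.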
#@title Figure 2: Aleatoric Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
m = yhat.mean()
s = yhat.stddev()
plt.plot(x_tst, m, 'r', linewidth=4, label='mean');
plt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'mean + 2 stddev');
plt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'mean - 2 stddev');
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 3: Epistemic Uncertainty
###Code
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
c = np.log(np.expm1(1.))
return tf.keras.Sequential([
tfp.layers.VariableLayer(2 * n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t[..., :n],
scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
reinterpreted_batch_ndims=1)),
])
# Specify the prior over `keras.layers.Dense` `kernel` and `bias`.
def prior_trainable(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
return tf.keras.Sequential([
tfp.layers.VariableLayer(n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t, scale=1),
reinterpreted_batch_ndims=1)),
])
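# Editor's note: posterior_mean_field is a fully factorized Gaussian surrogate posterior
# over the Dense kernel and bias (2*n free parameters: n means and n softplus-transformed
# scales; c is chosen so that softplus(c) = 1, i.e. the scales start near 1 if the
# variables initialize at zero). prior_trainable is a unit-scale Normal prior whose
# location is learned. kl_weight = 1/num_examples spreads the KL penalty across the dataset.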
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
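# Editor's note: DenseVariational draws a fresh weight sample from the learned posterior
# on every forward pass, so repeated calls to model(x_tst) (as in the figure cell below)
# produce an ensemble of regression lines whose spread reflects epistemic uncertainty.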
#@title Figure 3: Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.clf();
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 25:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=0.5)
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 4: Aleatoric & Epistemic Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))),
])
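# Editor's note: this model combines both effects -- sampled DenseVariational weights
# give epistemic uncertainty (the lines differ between calls), while the second output
# head gives an input-dependent scale, i.e. aleatoric uncertainty (each line carries its own band).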
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 4: Both Aleatoric & Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 15:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=1.)
plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label='ensemble means + 2 ensemble stdev' if i == 0 else None);
plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label='ensemble means - 2 ensemble stdev' if i == 0 else None);
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 5: Functional Uncertainty
###Code
#@title Custom PSD Kernel
class RBFKernelFn(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(RBFKernelFn, self).__init__(**kwargs)
dtype = kwargs.get('dtype', None)
self._amplitude = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='amplitude')
self._length_scale = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='length_scale')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
return tfp.positive_semidefinite_kernels.ExponentiatedQuadratic(
amplitude=tf.nn.softplus(0.1 * self._amplitude),
length_scale=tf.nn.softplus(5. * self._length_scale)
)
# For numeric stability, set the default floating-point dtype to float64
tf.keras.backend.set_floatx('float64')
# Build model.
num_inducing_points = 40
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=[1]),
tf.keras.layers.Dense(1, kernel_initializer='ones', use_bias=False),
tfp.layers.VariationalGaussianProcess(
num_inducing_points=num_inducing_points,
kernel_provider=RBFKernelFn(),
event_shape=[1],
inducing_index_points_initializer=tf.constant_initializer(
np.linspace(*x_range, num=num_inducing_points,
dtype=x.dtype)[..., np.newaxis]),
unconstrained_observation_noise_variance_initializer=(
tf.constant_initializer(np.array(0.54).astype(x.dtype))),
),
])
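# Editor's note: VariationalGaussianProcess approximates a GP posterior over functions
# using 40 trainable inducing points (initialized on an even grid over x_range). The
# RBFKernelFn layer above exists only to hold the trainable amplitude/length-scale,
# which the VGP layer consumes through kernel_provider; the Dense(1) in front acts as
# a learnable scaling of the input.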
# Do inference.
batch_size = 32
loss = lambda y, rv_y: rv_y.variational_loss(
y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0])
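# Editor's note: variational_loss is the (negative) evidence lower bound for a minibatch;
# weighting the KL term by batch_size / num_examples is a common convention so that the
# KL penalty is spread across minibatches rather than paid in full on every batch.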
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=loss)
model.fit(x, y, batch_size=batch_size, epochs=1000, verbose=False)
# Profit.
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 5: Functional Uncertainty
y, x, _ = load_dataset()
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
num_samples = 7
for i in range(num_samples):
sample_ = yhat.sample().numpy()
plt.plot(x_tst,
sample_[..., 0].T,
'r',
linewidth=0.9,
label='ensemble means' if i == 0 else None);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig5.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TFP Probabilistic Layers: Regression In this example we show how to fit regression models using TFP's "probabilistic layers." Dependencies & Prerequisites
###Code
#@title Install { display-mode: "form" }
TF_Installation = 'TF2 Nightly (GPU)' #@param ['TF2 Nightly (GPU)', 'TF2 Stable (GPU)', 'TF1 Nightly (GPU)', 'TF1 Stable (GPU)','System']
if TF_Installation == 'TF2 Nightly (GPU)':
!pip install -q --upgrade tf-nightly-gpu-2.0-preview
print('Installation of `tf-nightly-gpu-2.0-preview` complete.')
elif TF_Installation == 'TF2 Stable (GPU)':
!pip install -q --upgrade tensorflow-gpu==2.0.0-alpha0
print('Installation of `tensorflow-gpu==2.0.0-alpha0` complete.')
elif TF_Installation == 'TF1 Nightly (GPU)':
!pip install -q --upgrade tf-nightly-gpu
print('Installation of `tf-nightly-gpu` complete.')
elif TF_Installation == 'TF1 Stable (GPU)':
!pip install -q --upgrade tensorflow-gpu
print('Installation of `tensorflow-gpu` complete.')
elif TF_Installation == 'System':
pass
else:
raise ValueError('Selection Error: Please select a valid '
'installation option.')
#@title Install { display-mode: "form" }
TFP_Installation = "Nightly" #@param ["Nightly", "Stable", "System"]
if TFP_Installation == "Nightly":
!pip install -q tfp-nightly
print("Installation of `tfp-nightly` complete.")
elif TFP_Installation == "Stable":
!pip install -q --upgrade tensorflow-probability
print("Installation of `tensorflow-probability` complete.")
elif TFP_Installation == "System":
pass
else:
raise ValueError("Selection Error: Please select a valid "
"installation option.")
#@title Import { display-mode: "form" }
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
sns.reset_defaults()
#sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='talk',font_scale=0.7)
%matplotlib inline
tfd = tfp.distributions
###Output
_____no_output_____
###Markdown
Make things Fast! Before we dive in, let's make sure we're using a GPU for this demo. To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU". The following snippet will verify that we have access to a GPU.
###Code
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
###Output
_____no_output_____
###Markdown
Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.) Motivation Wouldn't it be great if we could use TFP to specify a probabilistic model and then simply minimize the negative log-likelihood, i.e.,
###Code
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
###Output
_____no_output_____
###Markdown
Well not only is it possible, but this colab shows how! (In the context of linear regression problems.)
###Code
#@title Synthesize dataset.
w0 = 0.125
b0 = 5.
x_range = [-20, 60]
def load_dataset(n=150, n_tst=150):
np.random.seed(43)
def s(x):
g = (x - x_range[0]) / (x_range[1] - x_range[0])
return 3 * (0.25 + g**2.)
x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0]
eps = np.random.randn(n) * s(x)
y = (w0 * x * (1. + np.sin(x)) + b0) + eps
x = x[..., np.newaxis]
x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32)
x_tst = x_tst[..., np.newaxis]
return y, x, x_tst
y, x, x_tst = load_dataset()
###Output
_____no_output_____
###Markdown
Case 1: No Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 1: No uncertainty.
w = np.squeeze(model.layers[-2].kernel.numpy())
b = np.squeeze(model.layers[-2].bias.numpy())
plt.figure(figsize=[6, 1.5]) # inches
#plt.figure(figsize=[8, 5]) # inches
plt.plot(x, y, 'b.', label='observed');
plt.plot(x_tst, yhat.mean(),'r', label='mean', linewidth=4);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 2: Aleatoric Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1 + 1),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 2: Aleatoric Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
m = yhat.mean()
s = yhat.stddev()
plt.plot(x_tst, m, 'r', linewidth=4, label='mean');
plt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'mean + 2 stddev');
plt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'mean - 2 stddev');
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 3: Epistemic Uncertainty
###Code
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
c = np.log(np.expm1(1.))
return tf.keras.Sequential([
tfp.layers.VariableLayer(2 * n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t[..., :n],
scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
reinterpreted_batch_ndims=1)),
])
# Specify the prior over `keras.layers.Dense` `kernel` and `bias`.
def prior_trainable(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
return tf.keras.Sequential([
tfp.layers.VariableLayer(n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t, scale=1),
reinterpreted_batch_ndims=1)),
])
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 3: Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.clf();
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 25:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=0.5)
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 4: Aleatoric & Epistemic Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 4: Both Aleatoric & Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 15:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=1.)
plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label='ensemble means + 2 ensemble stdev' if i == 0 else None);
plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label='ensemble means - 2 ensemble stdev' if i == 0 else None);
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 5: Functional Uncertainty
###Code
#@title Custom PSD Kernel
class RBFKernelFn(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(RBFKernelFn, self).__init__(**kwargs)
dtype = kwargs.get('dtype', None)
self._amplitude = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='amplitude')
self._length_scale = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='length_scale')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
return tfp.math.psd_kernels.ExponentiatedQuadratic(
amplitude=tf.nn.softplus(0.1 * self._amplitude),
length_scale=tf.nn.softplus(5. * self._length_scale)
)
# For numeric stability, set the default floating-point dtype to float64
tf.keras.backend.set_floatx('float64')
# Build model.
num_inducing_points = 40
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=[1]),
tf.keras.layers.Dense(1, kernel_initializer='ones', use_bias=False),
tfp.layers.VariationalGaussianProcess(
num_inducing_points=num_inducing_points,
kernel_provider=RBFKernelFn(),
event_shape=[1],
inducing_index_points_initializer=tf.constant_initializer(
np.linspace(*x_range, num=num_inducing_points,
dtype=x.dtype)[..., np.newaxis]),
unconstrained_observation_noise_variance_initializer=(
tf.constant_initializer(np.array(0.54).astype(x.dtype))),
),
])
# Do inference.
batch_size = 32
loss = lambda y, rv_y: rv_y.variational_loss(
y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0])
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=loss)
model.fit(x, y, batch_size=batch_size, epochs=1000, verbose=False)
# Profit.
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 5: Functional Uncertainty
y, x, _ = load_dataset()
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
num_samples = 7
for i in range(num_samples):
sample_ = yhat.sample().numpy()
plt.plot(x_tst,
sample_[..., 0].T,
'r',
linewidth=0.9,
label='ensemble means' if i == 0 else None);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig5.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TFP Probabilistic Layers: Regression In this example we show how to fit regression models using TFP's "probabilistic layers." Dependencies & Prerequisites
###Code
#@title Install { display-mode: "form" }
TF_Installation = 'TF2 Nightly (GPU)' #@param ['TF2 Nightly (GPU)', 'TF2 Stable (GPU)', 'TF1 Nightly (GPU)', 'TF1 Stable (GPU)','System']
if TF_Installation == 'TF2 Nightly (GPU)':
!pip install -q --upgrade tf-nightly-gpu-2.0-preview
print('Installation of `tf-nightly-gpu-2.0-preview` complete.')
elif TF_Installation == 'TF2 Stable (GPU)':
!pip install -q --upgrade tensorflow-gpu==2.0.0-alpha0
print('Installation of `tensorflow-gpu==2.0.0-alpha0` complete.')
elif TF_Installation == 'TF1 Nightly (GPU)':
!pip install -q --upgrade tf-nightly-gpu
print('Installation of `tf-nightly-gpu` complete.')
elif TF_Installation == 'TF1 Stable (GPU)':
!pip install -q --upgrade tensorflow-gpu
print('Installation of `tensorflow-gpu` complete.')
elif TF_Installation == 'System':
pass
else:
raise ValueError('Selection Error: Please select a valid '
'installation option.')
#@title Install { display-mode: "form" }
TFP_Installation = "Nightly" #@param ["Nightly", "Stable", "System"]
if TFP_Installation == "Nightly":
!pip install -q tfp-nightly
print("Installation of `tfp-nightly` complete.")
elif TFP_Installation == "Stable":
!pip install -q --upgrade tensorflow-probability
print("Installation of `tensorflow-probability` complete.")
elif TFP_Installation == "System":
pass
else:
raise ValueError("Selection Error: Please select a valid "
"installation option.")
#@title Import { display-mode: "form" }
from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
sns.reset_defaults()
#sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='talk',font_scale=0.7)
%matplotlib inline
tfd = tfp.distributions
###Output
_____no_output_____
###Markdown
Make things Fast! Before we dive in, let's make sure we're using a GPU for this demo. To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU". The following snippet will verify that we have access to a GPU.
###Code
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
###Output
_____no_output_____
###Markdown
Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.) Motivation Wouldn't it be great if we could use TFP to specify a probabilistic model and then simply minimize the negative log-likelihood, i.e.,
###Code
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
###Output
_____no_output_____
###Markdown
Well not only is it possible, but this colab shows how! (In the context of linear regression problems.)
###Code
#@title Synthesize dataset.
w0 = 0.125
b0 = 5.
x_range = [-20, 60]
def load_dataset(n=150, n_tst=150):
np.random.seed(43)
def s(x):
g = (x - x_range[0]) / (x_range[1] - x_range[0])
return 3 * (0.25 + g**2.)
x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0]
eps = np.random.randn(n) * s(x)
y = (w0 * x * (1. + np.sin(x)) + b0) + eps
x = x[..., np.newaxis]
x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32)
x_tst = x_tst[..., np.newaxis]
return y, x, x_tst
y, x, x_tst = load_dataset()
###Output
_____no_output_____
###Markdown
Case 1: No Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 1: No uncertainty.
w = np.squeeze(model.layers[-2].kernel.numpy())
b = np.squeeze(model.layers[-2].bias.numpy())
plt.figure(figsize=[6, 1.5]) # inches
#plt.figure(figsize=[8, 5]) # inches
plt.plot(x, y, 'b.', label='observed');
plt.plot(x_tst, yhat.mean(),'r', label='mean', linewidth=4);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 2: Aleatoric Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1 + 1),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 2: Aleatoric Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
m = yhat.mean()
s = yhat.stddev()
plt.plot(x_tst, m, 'r', linewidth=4, label='mean');
plt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'mean + 2 stddev');
plt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'mean - 2 stddev');
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 3: Epistemic Uncertainty
###Code
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
c = np.log(np.expm1(1.))
return tf.keras.Sequential([
tfp.layers.VariableLayer(2 * n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t[..., :n],
scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
reinterpreted_batch_ndims=1)),
])
# Specify the prior over `keras.layers.Dense` `kernel` and `bias`.
def prior_trainable(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
return tf.keras.Sequential([
tfp.layers.VariableLayer(n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t, scale=1),
reinterpreted_batch_ndims=1)),
])
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 3: Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.clf();
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 25:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=0.5)
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 4: Aleatoric & Epistemic Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 4: Both Aleatoric & Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 15:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=1.)
plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label='ensemble means + 2 ensemble stdev' if i == 0 else None);
plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label='ensemble means - 2 ensemble stdev' if i == 0 else None);
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 5: Functional Uncertainty
###Code
#@title Custom PSD Kernel
class RBFKernelFn(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(RBFKernelFn, self).__init__(**kwargs)
dtype = kwargs.get('dtype', None)
self._amplitude = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='amplitude')
self._length_scale = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='length_scale')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
return tfp.math.psd_kernels.ExponentiatedQuadratic(
amplitude=tf.nn.softplus(0.1 * self._amplitude),
length_scale=tf.nn.softplus(5. * self._length_scale)
)
# For numeric stability, set the default floating-point dtype to float64
tf.keras.backend.set_floatx('float64')
# Build model.
num_inducing_points = 40
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=[1]),
tf.keras.layers.Dense(1, kernel_initializer='ones', use_bias=False),
tfp.layers.VariationalGaussianProcess(
num_inducing_points=num_inducing_points,
kernel_provider=RBFKernelFn(),
event_shape=[1],
inducing_index_points_initializer=tf.constant_initializer(
np.linspace(*x_range, num=num_inducing_points,
dtype=x.dtype)[..., np.newaxis]),
unconstrained_observation_noise_variance_initializer=(
tf.constant_initializer(np.array(0.54).astype(x.dtype))),
),
])
# Do inference.
batch_size = 32
loss = lambda y, rv_y: rv_y.variational_loss(
y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0])
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=loss)
model.fit(x, y, batch_size=batch_size, epochs=1000, verbose=False)
# Profit.
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 5: Functional Uncertainty
y, x, _ = load_dataset()
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
num_samples = 7
for i in range(num_samples):
sample_ = yhat.sample().numpy()
plt.plot(x_tst,
sample_[..., 0].T,
'r',
linewidth=0.9,
label='ensemble means' if i == 0 else None);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig5.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TFP Probabilistic Layers: Regression In this example we show how to fit regression models using TFP's "probabilistic layers." Dependencies & Prerequisites
###Code
#@title Install { display-mode: "form" }
TF_Installation = 'TF2 Nightly (GPU)' #@param ['TF2 Nightly (GPU)', 'TF2 Stable (GPU)', 'TF1 Nightly (GPU)', 'TF1 Stable (GPU)','System']
if TF_Installation == 'TF2 Nightly (GPU)':
!pip install -q --upgrade tf-nightly-gpu-2.0-preview
print('Installation of `tf-nightly-gpu-2.0-preview` complete.')
elif TF_Installation == 'TF2 Stable (GPU)':
!pip install -q --upgrade tensorflow-gpu==2.0.0-alpha0
print('Installation of `tensorflow-gpu==2.0.0-alpha0` complete.')
elif TF_Installation == 'TF1 Nightly (GPU)':
!pip install -q --upgrade tf-nightly-gpu
print('Installation of `tf-nightly-gpu` complete.')
elif TF_Installation == 'TF1 Stable (GPU)':
!pip install -q --upgrade tensorflow-gpu
print('Installation of `tensorflow-gpu` complete.')
elif TF_Installation == 'System':
pass
else:
raise ValueError('Selection Error: Please select a valid '
'installation option.')
#@title Install { display-mode: "form" }
TFP_Installation = "Nightly" #@param ["Nightly", "Stable", "System"]
if TFP_Installation == "Nightly":
!pip install -q tfp-nightly
print("Installation of `tfp-nightly` complete.")
elif TFP_Installation == "Stable":
!pip install -q --upgrade tensorflow-probability
print("Installation of `tensorflow-probability` complete.")
elif TFP_Installation == "System":
pass
else:
raise ValueError("Selection Error: Please select a valid "
"installation option.")
#@title Import { display-mode: "form" }
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow as tf
from tensorflow.python import tf2
if not tf2.enabled():
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
assert tf2.enabled()
import tensorflow_probability as tfp
sns.reset_defaults()
#sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='talk',font_scale=0.7)
%matplotlib inline
tfd = tfp.distributions
###Output
_____no_output_____
###Markdown
Make things Fast! Before we dive in, let's make sure we're using a GPU for this demo. To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU". The following snippet will verify that we have access to a GPU.
###Code
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
###Output
_____no_output_____
###Markdown
Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.) Motivation Wouldn't it be great if we could use TFP to specify a probabilistic model and then simply minimize the negative log-likelihood, i.e.,
###Code
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
###Output
_____no_output_____
###Markdown
Well not only is it possible, but this colab shows how! (In the context of linear regression problems.)
###Code
#@title Synthesize dataset.
w0 = 0.125
b0 = 5.
x_range = [-20, 60]
def load_dataset(n=150, n_tst=150):
np.random.seed(43)
def s(x):
g = (x - x_range[0]) / (x_range[1] - x_range[0])
return 3 * (0.25 + g**2.)
x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0]
eps = np.random.randn(n) * s(x)
y = (w0 * x * (1. + np.sin(x)) + b0) + eps
x = x[..., np.newaxis]
x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32)
x_tst = x_tst[..., np.newaxis]
return y, x, x_tst
y, x, x_tst = load_dataset()
###Output
_____no_output_____
###Markdown
Case 1: No Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 1: No uncertainty.
w = np.squeeze(model.layers[-2].kernel.numpy())
b = np.squeeze(model.layers[-2].bias.numpy())
plt.figure(figsize=[6, 1.5]) # inches
#plt.figure(figsize=[8, 5]) # inches
plt.plot(x, y, 'b.', label='observed');
plt.plot(x_tst, yhat.mean(),'r', label='mean', linewidth=4);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 2: Aleatoric Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1 + 1),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 2: Aleatoric Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
m = yhat.mean()
s = yhat.stddev()
plt.plot(x_tst, m, 'r', linewidth=4, label='mean');
plt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'mean + 2 stddev');
plt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'mean - 2 stddev');
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 3: Epistemic Uncertainty
###Code
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
c = np.log(np.expm1(1.))
return tf.keras.Sequential([
tfp.layers.VariableLayer(2 * n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t[..., :n],
scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
reinterpreted_batch_ndims=1)),
])
# Specify the prior over `keras.layers.Dense` `kernel` and `bias`.
def prior_trainable(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
return tf.keras.Sequential([
tfp.layers.VariableLayer(n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t, scale=1),
reinterpreted_batch_ndims=1)),
])
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 3: Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.clf();
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 25:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=0.5)
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 4: Aleatoric & Epistemic Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 4: Both Aleatoric & Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 15:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=1.)
plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label='ensemble means + 2 ensemble stdev' if i == 0 else None);
plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label='ensemble means - 2 ensemble stdev' if i == 0 else None);
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 5: Functional Uncertainty
###Code
#@title Custom PSD Kernel
class RBFKernelFn(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(RBFKernelFn, self).__init__(**kwargs)
dtype = kwargs.get('dtype', None)
self._amplitude = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='amplitude')
self._length_scale = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='length_scale')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
return tfp.positive_semidefinite_kernels.ExponentiatedQuadratic(
amplitude=tf.nn.softplus(0.1 * self._amplitude),
length_scale=tf.nn.softplus(5. * self._length_scale)
)
# Build model.
num_inducing_points = 40
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=[1], dtype=x.dtype),
tf.keras.layers.Dense(1, kernel_initializer='ones', use_bias=False),
tfp.layers.VariationalGaussianProcess(
num_inducing_points=num_inducing_points,
kernel_provider=RBFKernelFn(dtype=x.dtype),
event_shape=[1],
inducing_index_points_initializer=tf.constant_initializer(
np.linspace(*x_range, num=num_inducing_points,
dtype=x.dtype)[..., np.newaxis]),
unconstrained_observation_noise_variance_initializer=(
tf.constant_initializer(np.array(0.54).astype(x.dtype))),
),
])
# Do inference.
batch_size = 32
loss = lambda y, rv_y: rv_y.variational_loss(
y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0])
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=loss)
model.fit(x, y, batch_size=batch_size, epochs=1000, verbose=False)
# Profit.
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 5: Functional Uncertainty
y, x, _ = load_dataset()
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
num_samples = 7
for i in range(num_samples):
sample_ = yhat.sample().numpy()
plt.plot(x_tst,
sample_[..., 0].T,
'r',
linewidth=0.9,
label='ensemble means' if i == 0 else None);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig5.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TFP Probabilistic Layers: Regression Run in Google Colab View source on GitHub In this example we show how to fit regression models using TFP's "probabilistic layers." Dependencies & Prerequisites
###Code
#@title Install { display-mode: "form" }
TF_Installation = 'TF2 Nightly (GPU)' #@param ['TF2 Nightly (GPU)', 'TF2 Stable (GPU)', 'TF1 Nightly (GPU)', 'TF1 Stable (GPU)','System']
if TF_Installation == 'TF2 Nightly (GPU)':
!pip install -q --upgrade tf-nightly-gpu-2.0-preview
print('Installation of `tf-nightly-gpu-2.0-preview` complete.')
elif TF_Installation == 'TF2 Stable (GPU)':
!pip install -q --upgrade tensorflow-gpu==2.0.0-alpha0
print('Installation of `tensorflow-gpu==2.0.0-alpha0` complete.')
elif TF_Installation == 'TF1 Nightly (GPU)':
!pip install -q --upgrade tf-nightly-gpu
print('Installation of `tf-nightly-gpu` complete.')
elif TF_Installation == 'TF1 Stable (GPU)':
!pip install -q --upgrade tensorflow-gpu
print('Installation of `tensorflow-gpu` complete.')
elif TF_Installation == 'System':
pass
else:
raise ValueError('Selection Error: Please select a valid '
'installation option.')
#@title Install { display-mode: "form" }
TFP_Installation = "Nightly" #@param ["Nightly", "Stable", "System"]
if TFP_Installation == "Nightly":
!pip install -q tfp-nightly
print("Installation of `tfp-nightly` complete.")
elif TFP_Installation == "Stable":
!pip install -q --upgrade tensorflow-probability
print("Installation of `tensorflow-probability` complete.")
elif TFP_Installation == "System":
pass
else:
raise ValueError("Selection Error: Please select a valid "
"installation option.")
#@title Import { display-mode: "form" }
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow as tf
from tensorflow.python import tf2
if not tf2.enabled():
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
assert tf2.enabled()
import tensorflow_probability as tfp
sns.reset_defaults()
#sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='talk',font_scale=0.7)
%matplotlib inline
tfd = tfp.distributions
###Output
_____no_output_____
###Markdown
Make things Fast! Before we dive in, let's make sure we're using a GPU for this demo. To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU".The following snippet will verify that we have access to a GPU.
###Code
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
###Output
_____no_output_____
###Markdown
Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.) Motivation Wouldn't it be great if we could use TFP to specify a probabilistic model then simply minimize the negative log-likelihood, i.e.,
###Code
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
###Output
_____no_output_____
###Markdown
Well not only is it possible, but this colab shows how! (In context of linear regression problems.)
###Code
#@title Synthesize dataset.
w0 = 0.125
b0 = 5.
x_range = [-20, 60]
def load_dataset(n=150, n_tst=150):
np.random.seed(43)
def s(x):
g = (x - x_range[0]) / (x_range[1] - x_range[0])
return 3 * (0.25 + g**2.)
x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0]
eps = np.random.randn(n) * s(x)
y = (w0 * x * (1. + np.sin(x)) + b0) + eps
x = x[..., np.newaxis]
x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32)
x_tst = x_tst[..., np.newaxis]
return y, x, x_tst
y, x, x_tst = load_dataset()
###Output
_____no_output_____
###Markdown
Case 1: No Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 1: No uncertainty.
w = np.squeeze(model.layers[-2].kernel.numpy())
b = np.squeeze(model.layers[-2].bias.numpy())
plt.figure(figsize=[6, 1.5]) # inches
#plt.figure(figsize=[8, 5]) # inches
plt.plot(x, y, 'b.', label='observed');
plt.plot(x_tst, yhat.mean(),'r', label='mean', linewidth=4);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 2: Aleatoric Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1 + 1),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 2: Aleatoric Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
m = yhat.mean()
s = yhat.stddev()
plt.plot(x_tst, m, 'r', linewidth=4, label='mean');
plt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'mean + 2 stddev');
plt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'mean - 2 stddev');
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 3: Epistemic Uncertainty
###Code
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
c = np.log(np.expm1(1.))
return tf.keras.Sequential([
tfp.layers.VariableLayer(2 * n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t[..., :n],
scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
reinterpreted_batch_ndims=1)),
])
# Specify the prior over `keras.layers.Dense` `kernel` and `bias`.
def prior_trainable(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
return tf.keras.Sequential([
tfp.layers.VariableLayer(n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t, scale=1),
reinterpreted_batch_ndims=1)),
])
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 3: Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.clf();
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 25:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=0.5)
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 4: Aleatoric & Epistemic Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 4: Both Aleatoric & Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 15:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=1.)
plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label='ensemble means + 2 ensemble stdev' if i == 0 else None);
plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label='ensemble means - 2 ensemble stdev' if i == 0 else None);
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 5: Functional Uncertainty
###Code
#@title Custom PSD Kernel
class RBFKernelFn(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(RBFKernelFn, self).__init__(**kwargs)
dtype = kwargs.get('dtype', None)
self._amplitude = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='amplitude')
self._length_scale = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='length_scale')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
return tfp.math.psd_kernels.ExponentiatedQuadratic(
amplitude=tf.nn.softplus(0.1 * self._amplitude),
length_scale=tf.nn.softplus(5. * self._length_scale)
)
# For numeric stability, set the default floating-point dtype to float64
tf.keras.backend.set_floatx('float64')
# Build model.
num_inducing_points = 40
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=[1]),
tf.keras.layers.Dense(1, kernel_initializer='ones', use_bias=False),
tfp.layers.VariationalGaussianProcess(
num_inducing_points=num_inducing_points,
kernel_provider=RBFKernelFn(),
event_shape=[1],
inducing_index_points_initializer=tf.constant_initializer(
np.linspace(*x_range, num=num_inducing_points,
dtype=x.dtype)[..., np.newaxis]),
unconstrained_observation_noise_variance_initializer=(
tf.constant_initializer(np.array(0.54).astype(x.dtype))),
),
])
# Do inference.
batch_size = 32
loss = lambda y, rv_y: rv_y.variational_loss(
y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0])
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=loss)
model.fit(x, y, batch_size=batch_size, epochs=1000, verbose=False)
# Profit.
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 5: Functional Uncertainty
y, x, _ = load_dataset()
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
num_samples = 7
for i in range(num_samples):
sample_ = yhat.sample().numpy()
plt.plot(x_tst,
sample_[..., 0].T,
'r',
linewidth=0.9,
label='ensemble means' if i == 0 else None);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig5.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Probability Authors.Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TFP Probabilistic Layers: Regression View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook In this example we show how to fit regression models using TFP's "probabilistic layers." Dependencies & Prerequisites
###Code
#@title Import { display-mode: "form" }
from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
sns.reset_defaults()
#sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='talk',font_scale=0.7)
%matplotlib inline
tfd = tfp.distributions
###Output
_____no_output_____
###Markdown
Make things Fast! Before we dive in, let's make sure we're using a GPU for this demo. To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU".The following snippet will verify that we have access to a GPU.
###Code
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
###Output
WARNING: GPU device not found.
###Markdown
Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.) Motivation Wouldn't it be great if we could use TFP to specify a probabilistic model then simply minimize the negative log-likelihood, i.e.,
###Code
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
###Output
_____no_output_____
###Markdown
Well not only is it possible, but this colab shows how! (In context of linear regression problems.)
###Code
#@title Synthesize dataset.
w0 = 0.125
b0 = 5.
x_range = [-20, 60]
def load_dataset(n=150, n_tst=150):
np.random.seed(43)
def s(x):
g = (x - x_range[0]) / (x_range[1] - x_range[0])
return 3 * (0.25 + g**2.)
x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0]
eps = np.random.randn(n) * s(x)
y = (w0 * x * (1. + np.sin(x)) + b0) + eps
x = x[..., np.newaxis]
x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32)
x_tst = x_tst[..., np.newaxis]
return y, x, x_tst
y, x, x_tst = load_dataset()
###Output
_____no_output_____
###Markdown
Case 1: No Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 1: No uncertainty.
w = np.squeeze(model.layers[-2].kernel.numpy())
b = np.squeeze(model.layers[-2].bias.numpy())
plt.figure(figsize=[6, 1.5]) # inches
#plt.figure(figsize=[8, 5]) # inches
plt.plot(x, y, 'b.', label='observed');
plt.plot(x_tst, yhat.mean(),'r', label='mean', linewidth=4);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 2: Aleatoric Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1 + 1),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 2: Aleatoric Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
m = yhat.mean()
s = yhat.stddev()
plt.plot(x_tst, m, 'r', linewidth=4, label='mean');
plt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'mean + 2 stddev');
plt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'mean - 2 stddev');
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 3: Epistemic Uncertainty
###Code
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
c = np.log(np.expm1(1.))
return tf.keras.Sequential([
tfp.layers.VariableLayer(2 * n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t[..., :n],
scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
reinterpreted_batch_ndims=1)),
])
# Specify the prior over `keras.layers.Dense` `kernel` and `bias`.
def prior_trainable(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
return tf.keras.Sequential([
tfp.layers.VariableLayer(n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t, scale=1),
reinterpreted_batch_ndims=1)),
])
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 3: Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.clf();
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 25:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=0.5)
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 4: Aleatoric & Epistemic Uncertainty
###Code
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 4: Both Aleatoric & Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 15:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=1.)
plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label='ensemble means + 2 ensemble stdev' if i == 0 else None);
plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label='ensemble means - 2 ensemble stdev' if i == 0 else None);
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Case 5: Functional Uncertainty
###Code
#@title Custom PSD Kernel
class RBFKernelFn(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(RBFKernelFn, self).__init__(**kwargs)
dtype = kwargs.get('dtype', None)
self._amplitude = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='amplitude')
self._length_scale = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='length_scale')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
return tfp.math.psd_kernels.ExponentiatedQuadratic(
amplitude=tf.nn.softplus(0.1 * self._amplitude),
length_scale=tf.nn.softplus(5. * self._length_scale)
)
# For numeric stability, set the default floating-point dtype to float64
tf.keras.backend.set_floatx('float64')
# Build model.
num_inducing_points = 40
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=[1]),
tf.keras.layers.Dense(1, kernel_initializer='ones', use_bias=False),
tfp.layers.VariationalGaussianProcess(
num_inducing_points=num_inducing_points,
kernel_provider=RBFKernelFn(),
event_shape=[1],
inducing_index_points_initializer=tf.constant_initializer(
np.linspace(*x_range, num=num_inducing_points,
dtype=x.dtype)[..., np.newaxis]),
unconstrained_observation_noise_variance_initializer=(
tf.constant_initializer(np.array(0.54).astype(x.dtype))),
),
])
# Do inference.
batch_size = 32
loss = lambda y, rv_y: rv_y.variational_loss(
y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0])
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=loss)
model.fit(x, y, batch_size=batch_size, epochs=1000, verbose=False)
# Profit.
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 5: Functional Uncertainty
y, x, _ = load_dataset()
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
num_samples = 7
for i in range(num_samples):
sample_ = yhat.sample().numpy()
plt.plot(x_tst,
sample_[..., 0].T,
'r',
linewidth=0.9,
label='ensemble means' if i == 0 else None);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig5.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____ |
Python_Stock/Technical_Indicators/DEMA.ipynb | ###Markdown
Double Exponential Moving Average (DEMA) https://www.investopedia.com/terms/d/double-exponential-moving-average.asp
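For reference, the cell below follows the standard construction described in the linked article: the price EMA is smoothed a second time, and the doubly smoothed series is used to cancel part of the single EMA's lag. With a look-back period $N$ (5 in the code), this is

$$\mathrm{DEMA}_N = 2\,\mathrm{EMA}_N(\mathrm{price}) - \mathrm{EMA}_N\big(\mathrm{EMA}_N(\mathrm{price})\big),$$

which corresponds to the `(2*df['EMA']) - df['EMA_S']` line in the code.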
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# yfinance is used to fetch data
import yfinance as yf
yf.pdr_override()
# input
symbol = 'AAPL'
start = '2018-08-01'
end = '2019-01-01'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
import talib as ta
df['EMA'] = ta.EMA(df['Adj Close'], timeperiod=5)
df['EMA_S'] = ta.EMA(df['EMA'], timeperiod=5)
df['DEMA'] = (2*df['EMA']) - df['EMA_S']
df.head(15)
# Line Chart
fig = plt.figure(figsize=(16,8))
ax1 = plt.subplot(111)
ax1.plot(df.index, df['Adj Close'])
ax1.plot(df.index, df['DEMA'])
ax1.axhline(y=df['Adj Close'].mean(),color='r')
ax1.grid()
#ax1.grid(True, which='both')
#ax1.grid(which='minor', linestyle='-', linewidth='0.5', color='black')
#ax1.grid(which='major', linestyle='-', linewidth='0.5', color='red')
#ax1.minorticks_on()
ax1.legend(loc='best')
ax1v = ax1.twinx()
ax1v.fill_between(df.index[0:],0, df.Volume[0:], facecolor='#0079a3', alpha=0.4)
ax1v.axes.yaxis.set_ticklabels([])
ax1v.set_ylim(0, 3*df.Volume.max())
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
###Output
_____no_output_____
###Markdown
Candlestick with DEMA
###Code
from matplotlib import dates as mdates
import datetime as dt
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = mdates.date2num(dfc['Date'].astype(dt.date))
dfc.head()
from mpl_finance import candlestick_ohlc
fig = plt.figure(figsize=(16,8))
ax1 = plt.subplot(111)
candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax1.plot(df.index, df['DEMA'])
ax1.xaxis_date()
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax1.grid(True, which='both')
ax1.minorticks_on()
ax1v = ax1.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
ax1v.axes.yaxis.set_ticklabels([])
ax1v.set_ylim(0, 3*df.Volume.max())
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
###Output
_____no_output_____
###Markdown
Double Exponential Moving Average (DEMA) https://www.investopedia.com/terms/d/double-exponential-moving-average.asp
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# fix_yahoo_finance is used to fetch data
import fix_yahoo_finance as yf
yf.pdr_override()
# input
symbol = 'AAPL'
start = '2018-08-01'
end = '2019-01-01'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
import talib as ta
df['EMA'] = ta.EMA(df['Adj Close'], timeperiod=5)
df['EMA_S'] = ta.EMA(df['EMA'], timeperiod=5)
df['DEMA'] = (2*df['EMA']) - df['EMA_S']
df.head(15)
# Line Chart
fig = plt.figure(figsize=(16,8))
ax1 = plt.subplot(111)
ax1.plot(df.index, df['Adj Close'])
ax1.plot(df.index, df['DEMA'])
ax1.axhline(y=df['Adj Close'].mean(),color='r')
ax1.grid()
#ax1.grid(True, which='both')
#ax1.grid(which='minor', linestyle='-', linewidth='0.5', color='black')
#ax1.grid(which='major', linestyle='-', linewidth='0.5', color='red')
#ax1.minorticks_on()
ax1.legend(loc='best')
ax1v = ax1.twinx()
ax1v.fill_between(df.index[0:],0, df.Volume[0:], facecolor='#0079a3', alpha=0.4)
ax1v.axes.yaxis.set_ticklabels([])
ax1v.set_ylim(0, 3*df.Volume.max())
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
###Output
_____no_output_____
###Markdown
Candlestick with DEMA
###Code
from matplotlib import dates as mdates
import datetime as dt
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = mdates.date2num(dfc['Date'].astype(dt.date))
dfc.head()
from mpl_finance import candlestick_ohlc
fig = plt.figure(figsize=(16,8))
ax1 = plt.subplot(111)
candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax1.plot(df.index, df['DEMA'])
ax1.xaxis_date()
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax1.grid(True, which='both')
ax1.minorticks_on()
ax1v = ax1.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
ax1v.axes.yaxis.set_ticklabels([])
ax1v.set_ylim(0, 3*df.Volume.max())
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
###Output
_____no_output_____ |
course4/week3-ungraded-labs/C4_W3_Lab_1_Intro_to_KFP/C4_W3_Lab_1_Kubeflow_Pipelines.ipynb | ###Markdown
Ungraded Lab: Building ML Pipelines with Kubeflow In this lab, you will have some hands-on practice with [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/pipelines-overview/). As mentioned in the lectures, modern ML engineering is moving towards pipeline automation for rapid iteration and experiment tracking. This is especially useful in production deployments where models need to be frequently retrained to catch trends in newer data.Kubeflow Pipelines is one component of the [Kubeflow](https://www.kubeflow.org/) suite of tools for machine learning workflows. It is deployed on top of a Kubernetes cluster and builds an infrastructure for orchestrating ML pipelines and monitoring inputs and outputs of each component. You will use this tool in Google Cloud Platform in the first assignment this week and this lab will help prepare you for that by exploring its features on a local deployment. In particular, you will:* setup [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/pipelines-overview/) in your local workstation* get familiar with the Kubeflow Pipelines UI* build pipeline components with Python and the Kubeflow Pipelines SDK* run an ML pipeline with Kubeflow PipelinesLet's begin! SetupYou will need these tool installed in your local machine to complete the exercises:1. Docker - platform for building and running containerized applications. You should already have this installed from the previous ungraded labs. If not, you can see the instructions [here](https://docs.docker.com/get-docker/). If you are using Docker for Desktop (Mac or Windows), you may need to increase the resource limits to start Kubeflow Pipelines later. You can click on the Docker icon in your Task Bar, choose `Preferences` and adjust the CPU to 4, Storage to 50GB, and the memory to at least 4GB (8GB recommended). Just make sure you are not maxing out any of these limits (i.e. the slider should ideally be at the midpoint or less) since it can make your machine slow or unresponsive. If you're constrained on resources, don't worry. You can still use this notebook as reference since we'll show the expected outputs at each step. The important thing is to become familiar with this Kubeflow Pipelines before you get more hands-on in the assignment. 2. kubectl - tool for running commands on Kubernetes clusters. This should also be installed from the previous labs. If not, please see the instructions [here](https://kubernetes.io/docs/tasks/tools/)3. [kind](https://kind.sigs.k8s.io/) - a Kubernetes distribution for running local clusters using Docker. Please follow the instructions [here](https://www.kubeflow.org/docs/components/pipelines/installation/localcluster-deployment/kind) to install kind and create a local cluster.4. Kubeflow Pipelines - a platform for building and deploying portable, scalable machine learning (ML) workflows based on Docker containers. Once you've created a local cluster using kind, you can deploy Kubeflow Pipelines with these commands.```export PIPELINE_VERSION=1.7.0kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION"kubectl wait --for condition=established --timeout=60s crd/applications.app.k8s.iokubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/env/platform-agnostic-pns?ref=$PIPELINE_VERSION"```You can enter the commands above one line at a time. These will setup all the deployments and spin up the pods for the entire application. 
These will be found in the `kubeflow` namespace. After sending the last command, it will take a moment (around 30 minutes) for all the deployments to be ready. You can send the command `kubectl get deploy -n kubeflow` a few times to check the status. You should see all deployments with the `READY` status before you can proceed to the next section.```NAME READY UP-TO-DATE AVAILABLE AGEcache-deployer-deployment 1/1 1 1 21hcache-server 1/1 1 1 21hmetadata-envoy-deployment 1/1 1 1 21hmetadata-grpc-deployment 1/1 1 1 21hmetadata-writer 1/1 1 1 21hminio 1/1 1 1 21hml-pipeline 1/1 1 1 21hml-pipeline-persistenceagent 1/1 1 1 21hml-pipeline-scheduledworkflow 1/1 1 1 21hml-pipeline-ui 1/1 1 1 21hml-pipeline-viewer-crd 1/1 1 1 21hml-pipeline-visualizationserver 1/1 1 1 21hmysql 1/1 1 1 21hworkflow-controller 1/1 1 1 21h```When everything is ready, you can run the following command to access the `ml-pipeline-ui` service.```kubectl port-forward -n kubeflow svc/ml-pipeline-ui 8080:80```The terminal should respond with something like this:```Forwarding from 127.0.0.1:8080 -> 3000Forwarding from [::1]:8080 -> 3000```You can then open your browser and go to `http://localhost:8080` to see the user interface. Operationalizing your ML PipelinesAs you know, generating a trained model involves executing a sequence of steps. Here is a high level overview of what these steps might look like:You can recall the very first model you ever built and more likely than not, your code then also followed a similar flow. In essence, building an ML pipeline mainly involves implementing these steps but you will need to optimize your operations to deliver value to your team. Platforms such as Kubeflow helps you to build ML pipelines that can be automated, reproducible, and easily monitored. You will see these as you build your pipeline in the next sections below. Pipeline componentsThe main building blocks of your ML pipeline are referred to as [components](https://www.kubeflow.org/docs/components/pipelines/overview/concepts/component/). In the context of Kubeflow, these are containerized applications that run a specific task in the pipeline. Moreover, these components generate and consume *artifacts* from other components. For example, a download task will generate a dataset artifact and this will be consumed by a data splitting task. If you go back to the simple pipeline image above and describe it using tasks and artifacts, it will look something like this:This relationship between tasks and their artifacts are what constitutes a pipeline and is also called a [directed acyclic graph (DAG)](https://en.wikipedia.org/wiki/Directed_acyclic_graph).Kubeflow Pipelines let's you create components either by [building the component specification directly](https://www.kubeflow.org/docs/components/pipelines/sdk/component-development/component-spec) or through [Python functions](https://www.kubeflow.org/docs/components/pipelines/sdk/python-function-components/). For this lab, you will use the latter since it is more intuitive and allows for quick iteration. As you gain more experience, you can explore building the component specification directly especially if you want to use different languages other than Python.You will begin by installing the Kubeflow Pipelines SDK. Remember to restart the runtime to load the newly installed modules in Colab.
###Code
# Install the KFP SDK
!pip install --upgrade kfp
###Output
_____no_output_____
###Markdown
**Note:** *Please do not proceed to the next steps without restarting the Runtime after installing `kfp`. You can do that by either pressing the `Restart Runtime` button at the end of the cell output above, or going to the `Runtime` menu in the Colab toolbar and selecting `Restart Runtime`.* Now you will import the modules you will be using to construct the Kubeflow pipeline. You will learn more about what each of these is for in the next sections.
###Code
# Import the modules you will use
import kfp
# For creating the pipeline
from kfp.v2 import dsl
# For building components
from kfp.v2.dsl import component
# Type annotations for the component artifacts
from kfp.v2.dsl import (
Input,
Output,
Artifact,
Dataset,
Model,
Metrics
)
###Output
_____no_output_____
###Markdown
In this lab, you will build a pipeline to train a multi-output model on the [Energy Efficiency dataset from the UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Energy+efficiency). It uses the building features (e.g. wall area, roof area) as inputs and has two outputs: Cooling Load and Heating Load. You will follow the five-task graph above with some slight differences in the generated artifacts. You will now build the component to load your data into the pipeline. The code is shown below and we will discuss the syntax in more detail after running it.
###Code
@component(
packages_to_install=["pandas", "openpyxl"],
output_component_file="download_data_component.yaml"
)
def download_data(url:str, output_csv:Output[Dataset]):
import pandas as pd
# Use pandas excel reader
df = pd.read_excel(url)
df = df.sample(frac=1).reset_index(drop=True)
df.to_csv(output_csv.path, index=False)
###Output
_____no_output_____
###Markdown
When building a component, it's good to determine first its inputs and outputs.* The dataset you want to download is an Excel file hosted by UCI [here](https://archive.ics.uci.edu/ml/machine-learning-databases/00242/ENB2012_data.xlsx) and you can load that using Pandas. Instead of hardcoding the URL in your code, you can design your function to accept an *input* string parameter so you can use other URLs in case the data has been transferred. * For the *output*, you will want to pass the downloaded dataset to the next task (i.e. data splitting). You should assign this as an `Output` type and specify what kind of artifact it is. Kubeflow provides [several of these](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/v2/components/types/artifact_types.py) such as `Dataset`, `Model`, `Metrics`, etc. All artifacts are saved by Kubeflow to a storage server. For local deployments, the default will be a [MinIO](https://min.io/) server. The [path](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/v2/components/types/artifact_types.pyL51) property fetches the location where this artifact will be saved and that's what you did above when you called `df.to_csv(output_csv.path, index=False)`The inputs and outputs are declared as parameters in the function definition. As you can see in the code we defined a `url` parameter with a `str` type and an `output_csv` parameter with an `Output[Dataset]` type.Lastly, you'll need to use the `component` decorator to specify that this is a Kubeflow Pipeline component. The [documentation](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/v2/components/component_decorator.pyL23) shows several parameters you can set and two of them are used in the code above. As the name suggests, the `packages_to_install` argument declares any extra packages outside the base image that is needed to run your code. As of writing, the default base image is `python:3.7` so you'll need `pandas` and `openpyxl` to load the Excel file. The `output_component_file` is an output file that contains the specification for your newly built component. You should see it in the Colab file explorer once you've ran the cell above. You'll see your code there and other settings that pertain to your component. You can use this file when building other pipelines if necessary. You don't have to redo your code again in a notebook in your next project as long as you have this YAML file. You can also pass this to your team members or use it in another machine. Kubeflow also hosts other reusable modules in their repo [here](https://github.com/kubeflow/pipelines/tree/master/components). For example, if you want a file downloader component in one of your projects, you can load the component from that repo using the [load_component_from_url](https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.htmlkfp.components.ComponentStore.load_component_from_url) function as shown below. The [YAML file](https://raw.githubusercontent.com/kubeflow/pipelines/master/components/web/Download/component-sdk-v2.yaml) of that component should tell you the inputs and outputs so you can use it accordingly.```web_downloader_op = kfp.components.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/master/components/web/Download/component-sdk-v2.yaml')``` Next, you will build the next component in the pipeline. Like in the previous step, you should design it first with inputs and outputs in mind. 
You know that the input of this component will come from the artifact generated by the `download_data()` function above. To declare input artifacts, you can annotate your parameter with the `Input[Dataset]` data type as shown below. For the outputs, you want to have two: train and test datasets. You can see the implementation below:
###Code
@component(
packages_to_install=["pandas", "sklearn"],
output_component_file="split_data_component.yaml"
)
def split_data(input_csv: Input[Dataset], train_csv: Output[Dataset], test_csv: Output[Dataset]):
import pandas as pd
from sklearn.model_selection import train_test_split
df = pd.read_csv(input_csv.path)
train, test = train_test_split(df, test_size=0.2)
train.to_csv(train_csv.path, index=False)
test.to_csv(test_csv.path, index=False)
###Output
_____no_output_____
###Markdown
Building and Running a Pipeline Now that you have at least two components, you can try building a pipeline just to quickly see how it works. The code is shown below. Basically, you just define a function with the sequence of steps then use the `dsl.pipeline` decorator. Notice in the last line (i.e. `split_data_task`) that to get a particular artifact from a previous step, you will need to use the `outputs` dictionary and use the parameter name as the key.
###Code
@dsl.pipeline(
name="my-pipeline",
)
def my_pipeline(url: str):
download_data_task = download_data(url=url)
split_data_task = split_data(input_csv=download_data_task.outputs['output_csv'])
###Output
_____no_output_____
###Markdown
To generate your pipeline specification file, you need to compile your pipeline function using the [`Compiler`](https://kubeflow-pipelines.readthedocs.io/en/stable/source/kfp.compiler.htmlkfp.compiler.Compiler) class as shown below.
###Code
kfp.compiler.Compiler(mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE).compile(
pipeline_func=my_pipeline,
package_path='pipeline.yaml')
###Output
_____no_output_____
###Markdown
After running the cell, you'll see a `pipeline.yaml` file in the Colab file explorer. Please download that because it will be needed in the next step. You can run a pipeline programmatically or from the UI. For this exercise, you will do it from the UI, and you will see how it is done programmatically in the Qwiklabs later this week. Please go back to the Kubeflow Pipelines UI and click `Upload Pipelines` from the `Pipelines` page. Next, select `Upload a file` and choose the `pipeline.yaml` you downloaded earlier, then click `Create`. This will open a screen showing your simple DAG (just two tasks). Click `Create Run` then scroll to the bottom to input the URL of the Excel file: https://archive.ics.uci.edu/ml/machine-learning-databases/00242/ENB2012_data.xlsx . Then click `Start`. Select the topmost entry in the `Runs` page and you should see the progress of your run. You can click on the `download-data` box to see more details about that particular task (i.e. the URL input and the container logs). After it turns green, you should also see the output artifact and you can download it if you want by clicking the minio link. Eventually, both tasks will turn green indicating that the run completed successfully. Nicely done! Generate the rest of the components Now that you've seen a sample workflow, you can build the rest of the components for preprocessing, model training, and model evaluation. The functions will be longer because the tasks are more complex. Nonetheless, they follow the same principles as before, such as declaring inputs and outputs and specifying the additional packages. In the `eval_model()` function, you'll notice the use of [`log_metric()`](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/v2/components/types/artifact_types.pyL123) to record the results. You'll see this in the `Visualizations` tab of that task after it has completed.
###Code
@component(
    packages_to_install=["pandas", "numpy"],
    output_component_file="preprocess_data_component.yaml"
)
def preprocess_data(input_train_csv: Input[Dataset], input_test_csv: Input[Dataset],
                    output_train_x: Output[Dataset], output_test_x: Output[Dataset],
                    output_train_y: Output[Artifact], output_test_y: Output[Artifact]):

    import pandas as pd
    import numpy as np
    import pickle

    def format_output(data):
        y1 = data.pop('Y1')
        y1 = np.array(y1)
        y2 = data.pop('Y2')
        y2 = np.array(y2)
        return y1, y2

    def norm(x, train_stats):
        return (x - train_stats['mean']) / train_stats['std']

    train = pd.read_csv(input_train_csv.path)
    test = pd.read_csv(input_test_csv.path)

    train_stats = train.describe()

    # Remove the label columns from the stats used for normalization
    train_stats.pop('Y1')
    train_stats.pop('Y2')
    train_stats = train_stats.transpose()

    # Get Y1 and Y2 as the 2 outputs and format them as np arrays
    train_Y = format_output(train)
    with open(output_train_y.path, "wb") as file:
        pickle.dump(train_Y, file)

    test_Y = format_output(test)
    with open(output_test_y.path, "wb") as file:
        pickle.dump(test_Y, file)

    # Normalize the training and test data
    norm_train_X = norm(train, train_stats)
    norm_test_X = norm(test, train_stats)

    norm_train_X.to_csv(output_train_x.path, index=False)
    norm_test_X.to_csv(output_test_x.path, index=False)


@component(
    packages_to_install=["tensorflow", "pandas"],
    output_component_file="train_model_component.yaml"
)
def train_model(input_train_x: Input[Dataset], input_train_y: Input[Artifact],
                output_model: Output[Model], output_history: Output[Artifact]):

    import pandas as pd
    import tensorflow as tf
    import pickle

    from tensorflow.keras.models import Model
    from tensorflow.keras.layers import Dense, Input

    norm_train_X = pd.read_csv(input_train_x.path)

    with open(input_train_y.path, "rb") as file:
        train_Y = pickle.load(file)

    def model_builder(train_X):
        # Define model layers.
        input_layer = Input(shape=(len(train_X.columns),))
        first_dense = Dense(units='128', activation='relu')(input_layer)
        second_dense = Dense(units='128', activation='relu')(first_dense)

        # Y1 output will be fed directly from the second dense
        y1_output = Dense(units='1', name='y1_output')(second_dense)
        third_dense = Dense(units='64', activation='relu')(second_dense)

        # Y2 output will come via the third dense
        y2_output = Dense(units='1', name='y2_output')(third_dense)

        # Define the model with the input layer and a list of output layers
        model = Model(inputs=input_layer, outputs=[y1_output, y2_output])
        model.summary()

        return model

    model = model_builder(norm_train_X)

    # Specify the optimizer, and compile the model with loss functions for both outputs
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)
    model.compile(optimizer=optimizer,
                  loss={'y1_output': 'mse', 'y2_output': 'mse'},
                  metrics={'y1_output': tf.keras.metrics.RootMeanSquaredError(),
                           'y2_output': tf.keras.metrics.RootMeanSquaredError()})

    # Train the model for 100 epochs
    history = model.fit(norm_train_X, train_Y, epochs=100, batch_size=10)

    model.save(output_model.path)

    # Save the training history so it can be inspected downstream
    with open(output_history.path, "wb") as file:
        pickle.dump(history.history, file)


@component(
    packages_to_install=["tensorflow", "pandas"],
    output_component_file="eval_model_component.yaml"
)
def eval_model(input_model: Input[Model], input_history: Input[Artifact],
               input_test_x: Input[Dataset], input_test_y: Input[Artifact],
               MLPipeline_Metrics: Output[Metrics]):

    import pandas as pd
    import tensorflow as tf
    import pickle

    model = tf.keras.models.load_model(input_model.path)

    norm_test_X = pd.read_csv(input_test_x.path)

    with open(input_test_y.path, "rb") as file:
        test_Y = pickle.load(file)

    # Test the model and print the loss and rmse for both outputs
    loss, Y1_loss, Y2_loss, Y1_rmse, Y2_rmse = model.evaluate(x=norm_test_X, y=test_Y)
    print("Loss = {}, Y1_loss = {}, Y1_rmse = {}, Y2_loss = {}, Y2_rmse = {}".format(loss, Y1_loss, Y1_rmse, Y2_loss, Y2_rmse))

    MLPipeline_Metrics.log_metric("loss", loss)
    MLPipeline_Metrics.log_metric("Y1_loss", Y1_loss)
    MLPipeline_Metrics.log_metric("Y2_loss", Y2_loss)
    MLPipeline_Metrics.log_metric("Y1_rmse", Y1_rmse)
    MLPipeline_Metrics.log_metric("Y2_rmse", Y2_rmse)
###Output
_____no_output_____
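###Markdown
Each `@component` decorator above also wrote a specification file (e.g. `train_model_component.yaml`) because of the `output_component_file` argument. Those files can be loaded back into other pipelines instead of redefining the Python functions. Below is a minimal sketch, assuming the YAML files are in the current working directory and that your `kfp` version provides `load_component_from_file`:
###Code
import kfp

# Sketch: load a previously exported component specification and reuse it
# as a step factory, just like the decorated function defined above.
reusable_train_model = kfp.components.load_component_from_file('train_model_component.yaml')
###Output
_____no_output_____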
###Markdown
Build and run the complete pipeline
You can then build and run the entire pipeline as you did earlier. It will take around 20 minutes for all the tasks to complete, and you can check the `Logs` tab of each task to see how it's going. For instance, you can see the model training epochs there, just as you normally would in a notebook environment.
###Code
# Define a pipeline and create a task from a component:
@dsl.pipeline(
    name="my-pipeline",
)
def my_pipeline(url: str):
    download_data_task = download_data(url=url)
    split_data_task = split_data(input_csv=download_data_task.outputs['output_csv'])
    preprocess_data_task = preprocess_data(input_train_csv=split_data_task.outputs['train_csv'],
                                           input_test_csv=split_data_task.outputs['test_csv'])
    train_model_task = train_model(input_train_x=preprocess_data_task.outputs["output_train_x"],
                                   input_train_y=preprocess_data_task.outputs["output_train_y"])
    eval_model_task = eval_model(input_model=train_model_task.outputs["output_model"],
                                 input_history=train_model_task.outputs["output_history"],
                                 input_test_x=preprocess_data_task.outputs["output_test_x"],
                                 input_test_y=preprocess_data_task.outputs["output_test_y"])

kfp.compiler.Compiler(mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE).compile(
    pipeline_func=my_pipeline,
    package_path='pipeline.yaml')
###Output
_____no_output_____
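###Markdown
As mentioned earlier, runs can also be submitted programmatically instead of through the UI upload flow. The sketch below assumes the `kubectl port-forward` command from the setup section is still active (so the API is reachable on `http://localhost:8080`) and that your `kfp` version exposes `create_run_from_pipeline_package`; treat it as a reference rather than part of the UI-based exercise:
###Code
import kfp

# Sketch: submit the compiled pipeline.yaml through the KFP client instead of the UI.
# Assumes the port-forward to the ml-pipeline-ui service is running on localhost:8080.
client = kfp.Client(host='http://localhost:8080')
run = client.create_run_from_pipeline_package(
    'pipeline.yaml',
    arguments={'url': 'https://archive.ics.uci.edu/ml/machine-learning-databases/00242/ENB2012_data.xlsx'},
)
print(run.run_id)
###Output
_____no_output_____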
###Markdown
Ungraded Lab: Building ML Pipelines with Kubeflow In this lab, you will have some hands-on practice with [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/pipelines-overview/). As mentioned in the lectures, modern ML engineering is moving towards pipeline automation for rapid iteration and experiment tracking. This is especially useful in production deployments where models need to be frequently retrained to catch trends in newer data.Kubeflow Pipelines is one component of the [Kubeflow](https://www.kubeflow.org/) suite of tools for machine learning workflows. It is deployed on top of a Kubernetes cluster and builds an infrastructure for orchestrating ML pipelines and monitoring inputs and outputs of each component. You will use this tool in Google Cloud Platform in the first assignment this week and this lab will help prepare you for that by exploring its features on a local deployment. In particular, you will:* setup [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/pipelines-overview/) in your local workstation* get familiar with the Kubeflow Pipelines UI* build pipeline components with Python and the Kubeflow Pipelines SDK* run an ML pipeline with Kubeflow PipelinesLet's begin! SetupYou will need these tool installed in your local machine to complete the exercises:1. Docker - platform for building and running containerized applications. You should already have this installed from the previous ungraded labs. If not, you can see the instructions [here](https://docs.docker.com/get-docker/). If you are using Docker for Desktop (Mac or Windows), you may need to increase the resource limits to start Kubeflow Pipelines later. You can click on the Docker icon in your Task Bar, choose `Preferences` and adjust the CPU to 4, Storage to 50GB, and the memory to at least 4GB (8GB recommended). Just make sure you are not maxing out any of these limits (i.e. the slider should ideally be at the midpoint or less) since it can make your machine slow or unresponsive. If you're constrained on resources, don't worry. You can still use this notebook as reference since we'll show the expected outputs at each step. The important thing is to become familiar with this Kubeflow Pipelines before you get more hands-on in the assignment. 2. kubectl - tool for running commands on Kubernetes clusters. This should also be installed from the previous labs. If not, please see the instructions [here](https://kubernetes.io/docs/tasks/tools/)3. [kind](https://kind.sigs.k8s.io/) - a Kubernetes distribution for running local clusters using Docker. Please follow the instructions [here](https://www.kubeflow.org/docs/components/pipelines/installation/localcluster-deployment/kind) to install kind and create a local cluster.4. Kubeflow Pipelines - a platform for building and deploying portable, scalable machine learning (ML) workflows based on Docker containers. Once you've created a local cluster using kind, you can deploy Kubeflow Pipelines with these commands.```export PIPELINE_VERSION=1.7.0kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION"kubectl wait --for condition=established --timeout=60s crd/applications.app.k8s.iokubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/env/platform-agnostic-pns?ref=$PIPELINE_VERSION"```You can enter the commands above one line at a time. These will setup all the deployments and spin up the pods for the entire application. 
These will be found in the `kubeflow` namespace. After sending the last command, it will take a moment (around 30 minutes) for all the deployments to be ready. You can send the command `kubectl get deploy -n kubeflow` a few times to check the status. You should see all deployments with the `READY` status before you can proceed to the next section.```NAME READY UP-TO-DATE AVAILABLE AGEcache-deployer-deployment 1/1 1 1 21hcache-server 1/1 1 1 21hmetadata-envoy-deployment 1/1 1 1 21hmetadata-grpc-deployment 1/1 1 1 21hmetadata-writer 1/1 1 1 21hminio 1/1 1 1 21hml-pipeline 1/1 1 1 21hml-pipeline-persistenceagent 1/1 1 1 21hml-pipeline-scheduledworkflow 1/1 1 1 21hml-pipeline-ui 1/1 1 1 21hml-pipeline-viewer-crd 1/1 1 1 21hml-pipeline-visualizationserver 1/1 1 1 21hmysql 1/1 1 1 21hworkflow-controller 1/1 1 1 21h```When everything is ready, you can run the following command to access the `ml-pipeline-ui` service.```kubectl port-forward -n kubeflow svc/ml-pipeline-ui 8080:80```The terminal should respond with something like this:```Forwarding from 127.0.0.1:8080 -> 3000Forwarding from [::1]:8080 -> 3000```You can then open your browser and go to `http://localhost:8080` to see the user interface. Operationalizing your ML PipelinesAs you know, generating a trained model involves executing a sequence of steps. Here is a high level overview of what these steps might look like:You can recall the very first model you ever built and more likely than not, your code then also followed a similar flow. In essence, building an ML pipeline mainly involves implementing these steps but you will need to optimize your operations to deliver value to your team. Platforms such as Kubeflow helps you to build ML pipelines that can be automated, reproducible, and easily monitored. You will see these as you build your pipeline in the next sections below. Pipeline componentsThe main building blocks of your ML pipeline are referred to as [components](https://www.kubeflow.org/docs/components/pipelines/overview/concepts/component/). In the context of Kubeflow, these are containerized applications that run a specific task in the pipeline. Moreover, these components generate and consume *artifacts* from other components. For example, a download task will generate a dataset artifact and this will be consumed by a data splitting task. If you go back to the simple pipeline image above and describe it using tasks and artifacts, it will look something like this:This relationship between tasks and their artifacts are what constitutes a pipeline and is also called a [directed acyclic graph (DAG)](https://en.wikipedia.org/wiki/Directed_acyclic_graph).Kubeflow Pipelines let's you create components either by [building the component specification directly](https://www.kubeflow.org/docs/components/pipelines/sdk/component-development/component-spec) or through [Python functions](https://www.kubeflow.org/docs/components/pipelines/sdk/python-function-components/). For this lab, you will use the latter since it is more intuitive and allows for quick iteration. As you gain more experience, you can explore building the component specification directly especially if you want to use different languages other than Python.You will begin by installing the Kubeflow Pipelines SDK. Remember to restart the runtime to load the newly installed modules in Colab.
###Code
# Install the KFP SDK
!pip install --upgrade kfp
###Output
_____no_output_____
###Markdown
**Note:** *Please do not proceed to the next steps without restarting the Runtime after installing `kfp`. You can do that by either pressing the `Restart Runtime` button at the end of the cell output above, or going to the `Runtime` button at the Colab toolbar above and selecting `Restart Runtime`.* Now you will import the modules you will be using to construct the Kubeflow pipeline. You will know more what these are for in the next sections.
###Code
# Import the modules you will use
import kfp
# For creating the pipeline
from kfp.v2 import dsl
# For building components
from kfp.v2.dsl import component
# Type annotations for the component artifacts
from kfp.v2.dsl import (
Input,
Output,
Artifact,
Dataset,
Model,
Metrics
)
###Output
_____no_output_____
###Markdown
In this lab, you will build a pipeline to train a multi-output model trained on the [Energy Effeciency dataset from the UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Energy+efficiency). It uses the bulding features (e.g. wall area, roof area) as inputs and has two outputs: Cooling Load and Heating Load. You will follow the five-task graph above with some slight differences in the generated artifacts.You will now build the component to load your data into the pipeline. The code is shown below and we will discuss the syntax in more detail after running it.
###Code
@component(
packages_to_install=["pandas", "openpyxl"],
output_component_file="download_data_component.yaml"
)
def download_data(url:str, output_csv:Output[Dataset]):
import pandas as pd
# Use pandas excel reader
df = pd.read_excel(url)
df = df.sample(frac=1).reset_index(drop=True)
df.to_csv(output_csv.path, index=False)
###Output
_____no_output_____
###Markdown
When building a component, it's good to determine first its inputs and outputs.* The dataset you want to download is an Excel file hosted by UCI [here](https://archive.ics.uci.edu/ml/machine-learning-databases/00242/ENB2012_data.xlsx) and you can load that using Pandas. Instead of hardcoding the URL in your code, you can design your function to accept an *input* string parameter so you can use other URLs in case the data has been transferred. * For the *output*, you will want to pass the downloaded dataset to the next task (i.e. data splitting). You should assign this as an `Output` type and specify what kind of artifact it is. Kubeflow provides [several of these](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/v2/components/types/artifact_types.py) such as `Dataset`, `Model`, `Metrics`, etc. All artifacts are saved by Kubeflow to a storage server. For local deployments, the default will be a [MinIO](https://min.io/) server. The [path](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/v2/components/types/artifact_types.pyL51) property fetches the location where this artifact will be saved and that's what you did above when you called `df.to_csv(output_csv.path, index=False)`The inputs and outputs are declared as parameters in the function definition. As you can see in the code we defined a `url` parameter with a `str` type and an `output_csv` parameter with an `Output[Dataset]` type.Lastly, you'll need to use the `component` decorator to specify that this is a Kubeflow Pipeline component. The [documentation](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/v2/components/component_decorator.pyL23) shows several parameters you can set and two of them are used in the code above. As the name suggests, the `packages_to_install` argument declares any extra packages outside the base image that is needed to run your code. As of writing, the default base image is `python:3.7` so you'll need `pandas` and `openpyxl` to load the Excel file. The `output_component_file` is an output file that contains the specification for your newly built component. You should see it in the Colab file explorer once you've ran the cell above. You'll see your code there and other settings that pertain to your component. You can use this file when building other pipelines if necessary. You don't have to redo your code again in a notebook in your next project as long as you have this YAML file. You can also pass this to your team members or use it in another machine. Kubeflow also hosts other reusable modules in their repo [here](https://github.com/kubeflow/pipelines/tree/master/components). For example, if you want a file downloader component in one of your projects, you can load the component from that repo using the [load_component_from_url](https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.htmlkfp.components.ComponentStore.load_component_from_url) function as shown below. The [YAML file](https://raw.githubusercontent.com/kubeflow/pipelines/master/components/web/Download/component-sdk-v2.yaml) of that component should tell you the inputs and outputs so you can use it accordingly.```web_downloader_op = kfp.components.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/master/components/web/Download/component-sdk-v2.yaml')``` Next, you will build the next component in the pipeline. Like in the previous step, you should design it first with inputs and outputs in mind. 
You know that the input of this component will come from the artifact generated by the `download_data()` function above. To declare input artifacts, you can annotate your parameter with the `Input[Dataset]` data type as shown below. For the outputs, you want to have two: train and test datasets. You can see the implementation below:
###Code
@component(
packages_to_install=["pandas", "sklearn"],
output_component_file="split_data_component.yaml"
)
def split_data(input_csv: Input[Dataset], train_csv: Output[Dataset], test_csv: Output[Dataset]):
import pandas as pd
from sklearn.model_selection import train_test_split
df = pd.read_csv(input_csv.path)
train, test = train_test_split(df, test_size=0.2)
train.to_csv(train_csv.path, index=False)
test.to_csv(test_csv.path, index=False)
###Output
_____no_output_____
###Markdown
Building and Running a Pipeline Now that you have at least two components, you can try building a pipeline just to quickly see how it works. The code is shown below. Basically, you just define a function with the sequence of steps then use the `dsl.pipeline` decorator. Notice in the last line (i.e. `split_data_task`) that to get a particular artifact from a previous step, you will need to use the `outputs` dictionary and use the parameter name as the key.
###Code
@dsl.pipeline(
name="my-pipeline",
)
def my_pipeline(url: str):
download_data_task = download_data(url=url)
split_data_task = split_data(input_csv=download_data_task.outputs['output_csv'])
###Output
_____no_output_____
###Markdown
To generate your pipeline specification file, you need to compile your pipeline function using the [`Compiler`](https://kubeflow-pipelines.readthedocs.io/en/stable/source/kfp.compiler.html#kfp.compiler.Compiler) class as shown below.
###Code
kfp.compiler.Compiler(mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE).compile(
pipeline_func=my_pipeline,
package_path='pipeline.yaml')
###Output
_____no_output_____
###Markdown
After running the cell, you'll see a `pipeline.yaml` file in the Colab file explorer. Please download that because it will be needed in the next step. You can run a pipeline programmatically or from the UI. For this exercise, you will do it from the UI and you will see how it is done programmatically in the Qwiklabs later this week. Please go back to the Kubeflow Pipelines UI and click `Upload Pipelines` from the `Pipelines` page. Next, select `Upload a file` and choose the `pipeline.yaml` you downloaded earlier, then click `Create`. This will open a screen showing your simple DAG (just two tasks). Click `Create Run` then scroll to the bottom to input the URL of the Excel file: https://archive.ics.uci.edu/ml/machine-learning-databases/00242/ENB2012_data.xlsx . Then click `Start`. Select the topmost entry in the `Runs` page and you should see the progress of your run. You can click on the `download-data` box to see more details about that particular task (i.e. the URL input and the container logs). After it turns green, you should also see the output artifact and you can download it if you want by clicking the MinIO link. Eventually, both tasks will turn green indicating that the run completed successfully. Nicely done! Generate the rest of the components Now that you've seen a sample workflow, you can build the rest of the components for preprocessing, model training, and model evaluation. The functions will be longer because the task is more complex. Nonetheless, they follow the same principles as before, such as declaring inputs and outputs, and specifying the additional packages. In the `eval_model()` function, you'll notice the use of the [`log_metric()`](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/v2/components/types/artifact_types.py#L123) method to record the results. You'll see this in the `Visualizations` tab of that task after it has completed.
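If you later want to skip the UI, the same run can also be submitted through the SDK client. This is only a sketch and not a required step of this lab; it assumes the `kubectl port-forward` from the setup section is still active, and the host URL and run name below are illustrative:

```
import kfp

# Connect to the Kubeflow Pipelines API exposed by the port-forward (assumed endpoint).
client = kfp.Client(host='http://localhost:8080')

# Submit the compiled pipeline, passing the same Excel URL you would type into the UI.
run = client.create_run_from_pipeline_package(
    'pipeline.yaml',
    arguments={'url': 'https://archive.ics.uci.edu/ml/machine-learning-databases/00242/ENB2012_data.xlsx'},
    run_name='energy-efficiency-run',  # illustrative run name
)
```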
###Code
@component(
packages_to_install=["pandas", "numpy"],
output_component_file="preprocess_data_component.yaml"
)
def preprocess_data(input_train_csv: Input[Dataset], input_test_csv: Input[Dataset],
output_train_x: Output[Dataset], output_test_x: Output[Dataset],
output_train_y: Output[Artifact], output_test_y: Output[Artifact]):
import pandas as pd
import numpy as np
import pickle
def format_output(data):
y1 = data.pop('Y1')
y1 = np.array(y1)
y2 = data.pop('Y2')
y2 = np.array(y2)
return y1, y2
def norm(x, train_stats):
return (x - train_stats['mean']) / train_stats['std']
train = pd.read_csv(input_train_csv.path)
test = pd.read_csv(input_test_csv.path)
train_stats = train.describe()
# Get Y1 and Y2 as the 2 outputs and format them as np arrays
train_stats.pop('Y1')
train_stats.pop('Y2')
train_stats = train_stats.transpose()
train_Y = format_output(train)
with open(output_train_y.path, "wb") as file:
pickle.dump(train_Y, file)
test_Y = format_output(test)
with open(output_test_y.path, "wb") as file:
pickle.dump(test_Y, file)
# Normalize the training and test data
norm_train_X = norm(train, train_stats)
norm_test_X = norm(test, train_stats)
norm_train_X.to_csv(output_train_x.path, index=False)
norm_test_X.to_csv(output_test_x.path, index=False)
@component(
packages_to_install=["tensorflow", "pandas"],
output_component_file="train_model_component.yaml"
)
def train_model(input_train_x: Input[Dataset], input_train_y: Input[Artifact],
output_model: Output[Model], output_history: Output[Artifact]):
import pandas as pd
import tensorflow as tf
import pickle
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input
norm_train_X = pd.read_csv(input_train_x.path)
with open(input_train_y.path, "rb") as file:
train_Y = pickle.load(file)
def model_builder(train_X):
# Define model layers.
input_layer = Input(shape=(len(train_X.columns),))
first_dense = Dense(units='128', activation='relu')(input_layer)
second_dense = Dense(units='128', activation='relu')(first_dense)
# Y1 output will be fed directly from the second dense
y1_output = Dense(units='1', name='y1_output')(second_dense)
third_dense = Dense(units='64', activation='relu')(second_dense)
# Y2 output will come via the third dense
y2_output = Dense(units='1', name='y2_output')(third_dense)
# Define the model with the input layer and a list of output layers
model = Model(inputs=input_layer, outputs=[y1_output, y2_output])
print(model.summary())
return model
model = model_builder(norm_train_X)
# Specify the optimizer, and compile the model with loss functions for both outputs
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)
model.compile(optimizer=optimizer,
loss={'y1_output': 'mse', 'y2_output': 'mse'},
metrics={'y1_output': tf.keras.metrics.RootMeanSquaredError(),
'y2_output': tf.keras.metrics.RootMeanSquaredError()})
# Train the model for 100 epochs
history = model.fit(norm_train_X, train_Y, epochs=100, batch_size=10)
model.save(output_model.path)
with open(output_history.path, "wb") as file:
pickle.dump(history.history, file)
@component(
packages_to_install=["tensorflow", "pandas"],
output_component_file="eval_model_component.yaml"
)
def eval_model(input_model: Input[Model], input_history: Input[Artifact],
input_test_x: Input[Dataset], input_test_y: Input[Artifact],
MLPipeline_Metrics: Output[Metrics]):
import pandas as pd
import tensorflow as tf
import pickle
model = tf.keras.models.load_model(input_model.path)
norm_test_X = pd.read_csv(input_test_x.path)
with open(input_test_y.path, "rb") as file:
test_Y = pickle.load(file)
# Test the model and print loss and rmse for both outputs
loss, Y1_loss, Y2_loss, Y1_rmse, Y2_rmse = model.evaluate(x=norm_test_X, y=test_Y)
print("Loss = {}, Y1_loss = {}, Y1_rmse = {}, Y2_loss = {}, Y2_rmse = {}".format(loss, Y1_loss, Y1_rmse, Y2_loss, Y2_rmse))
MLPipeline_Metrics.log_metric("loss", loss)
MLPipeline_Metrics.log_metric("Y1_loss", Y1_loss)
MLPipeline_Metrics.log_metric("Y2_loss", Y2_loss)
MLPipeline_Metrics.log_metric("Y1_rmse", Y1_rmse)
MLPipeline_Metrics.log_metric("Y2_rmse", Y2_rmse)
###Output
_____no_output_____
###Markdown
Build and run the complete pipeline You can then build and run the entire pipeline as you did earlier. It will take around 20 minutes for all the tasks to complete, and you can check the `Logs` tab of each task to see how it's going. For instance, you can watch the model training epochs there, just as you normally would in a notebook environment.
###Code
# Define a pipeline and create a task from a component:
@dsl.pipeline(
name="my-pipeline",
)
def my_pipeline(url: str):
download_data_task = download_data(url=url)
split_data_task = split_data(input_csv=download_data_task.outputs['output_csv'])
preprocess_data_task = preprocess_data(input_train_csv=split_data_task.outputs['train_csv'],
input_test_csv=split_data_task.outputs['test_csv'])
train_model_task = train_model(input_train_x=preprocess_data_task.outputs["output_train_x"],
input_train_y=preprocess_data_task.outputs["output_train_y"])
eval_model_task = eval_model(input_model=train_model_task.outputs["output_model"],
input_history=train_model_task.outputs["output_history"],
input_test_x=preprocess_data_task.outputs["output_test_x"],
input_test_y=preprocess_data_task.outputs["output_test_y"])
kfp.compiler.Compiler(mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE).compile(
pipeline_func=my_pipeline,
package_path='pipeline.yaml')
###Output
_____no_output_____
|
.ipynb_checkpoints/StevenSmiley-BreastCancer-ML-checkpoint.ipynb | ###Markdown
Using Machine Learning to Diagnose Breast Cancer in Python by: Steven Smiley Problem Statement: Find a Machine Learning (ML) model that accurately predicts breast cancer based on the 30 features described below. 1. Background: Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. The separating plane in the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34]. This database is also available through the UW CS ftp server: ftp ftp.cs.wisc.edu cd math-prog/cpo-dataset/machine-learn/WDBC/ It can also be found on the UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29 Attribute Information: 1) ID number 2) Diagnosis (M = malignant, B = benign) 3-32) Ten real-valued features are computed for each cell nucleus: a) radius (mean of distances from center to points on the perimeter) b) texture (standard deviation of gray-scale values) c) perimeter d) area e) smoothness (local variation in radius lengths) f) compactness (perimeter^2 / area - 1.0) g) concavity (severity of concave portions of the contour) h) concave points (number of concave portions of the contour) i) symmetry j) fractal dimension ("coastline approximation" - 1) The mean, standard error and "worst" or largest (mean of the three largest values) of these features were computed for each image, resulting in 30 features. For instance, field 3 is Mean Radius, field 13 is Radius SE, field 23 is Worst Radius. All feature values are recoded with four significant digits. Missing attribute values: none. Class distribution: 357 benign, 212 malignant 2. Abstract: When it comes to diagnosing breast cancer, we want to make sure we don't have too many false positives (you are told you have cancer and may start treatment, but you don't actually have it) or false negatives (you have cancer, but are told you don't). Therefore, the model with the highest overall accuracy was chosen, which was the Gradient Boosted model. Several different models were evaluated through k-fold cross-validation and GridSearchCV, which iterates over each algorithm's hyperparameters: * Logistic Regression * Support Vector Machine * Neural Network * Random Forest * Gradient Boost * eXtreme Gradient Boost All of the models performed well after fine-tuning their hyperparameters, but the best model was the Gradient Boosted model, with an accuracy of ~97.4%. Out of the 20% of data withheld for this test (114 random individuals), only 3 were misdiagnosed. Two of them were misdiagnosed via False Positive, which means they did not have cancer but were told they did. One was misdiagnosed via False Negative, which means they had cancer but was told they didn't. No model is perfect, but I am happy about how accurate my model is here. If on average only 3 people out of 114 are misdiagnosed, that is a good start for making a model. Furthermore, the Feature Importance plots show that the "concave points mean" was by far the most significant feature to extract from a biopsy and should be taken each time if possible for predicting breast cancer. 3. Import Libraries
###Code
import warnings
import os # Get Current Directory
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score, precision_score, recall_score
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import joblib
from time import time
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from xgboost import XGBClassifier
from sklearn.decomposition import PCA
from scipy import stats
import subprocess
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from sklearn.utils.multiclass import unique_labels
import itertools
###Output
_____no_output_____
###Markdown
3. Hide Warnings
###Code
warnings.filterwarnings("ignore")
pd.set_option('mode.chained_assignment', None)
###Output
_____no_output_____
###Markdown
4. Get Current Directory
###Code
currentDirectory=os.getcwd()
print(currentDirectory)
###Output
/Users/stevensmiley/Desktop/GraduateSchool/Python/PythonCodes/BreastCancer
###Markdown
5. Import and View Data
###Code
#data= pd.read_csv('/kaggle/input/breast-cancer-wisconsin-data/data.csv')
data=os.path.join(currentDirectory,'data.csv')
data= pd.read_csv(data)
data.head(10) # view the first 10 columns
###Output
_____no_output_____
###Markdown
5.1 Import and View Data: Check for Missing Values. As the background stated, no missing values should be present. The following verifies that. The last column ('Unnamed: 32') doesn't hold any information and should be removed. In addition, the diagnosis should be changed to a binary classification of 0 = benign and 1 = malignant.
###Code
data.isnull().sum()
# Drop Unnamed: 32 variable that has NaN values.
data.drop(['Unnamed: 32'],axis=1,inplace=True)
# Convert Diagnosis for Cancer from Categorical Variable to Binary
diagnosis_num={'B':0,'M':1}
data['diagnosis']=data['diagnosis'].map(diagnosis_num)
# Verify Data Changes, look at first 5 rows
data.head(5)
###Output
_____no_output_____
###Markdown
6. Split Data for Training A good rule of thumb is to hold out 20 percent of the data for testing.
###Code
X = data.drop(['id','diagnosis'], axis= 1)
y = data.diagnosis
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size= 0.2, random_state= 42)
# Use Pandas DataFrame
X_train = pd.DataFrame(X_train)
X_test=pd.DataFrame(X_test)
y_train = pd.DataFrame(y_train)
y_test=pd.DataFrame(y_test)
tr_features=X_train
tr_labels=y_train
val_features = X_test
val_labels=y_test
###Output
_____no_output_____
###Markdown
Verify the data was split correctly
###Code
print('X_train - length:',len(X_train), 'y_train - length:',len(y_train))
print('X_test - length:',len(X_test),'y_test - length:',len(y_test))
print('Percent heldout for testing:', round(100*(len(X_test)/len(data)),0),'%')
###Output
X_train - length: 455 y_train - length: 455
X_test - length: 114 y_test - length: 114
Percent heldout for testing: 20.0 %
###Markdown
7. Machine Learning: In order to find a good model, several algorithms are tested on the training dataset. A sensitivity study over different hyperparameters of each algorithm is run with GridSearchCV in order to optimize each model. The best model is the one that has the highest accuracy without overfitting, judged by looking at both the training data and the validation data results. Computer time does not appear to be an issue for these models, so it has little weight in deciding between models. GridSearchCV: class sklearn.model_selection.GridSearchCV(estimator, param_grid, scoring=None, n_jobs=None, iid='deprecated', refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', error_score=nan, return_train_score=False). Exhaustive search over specified parameter values for an estimator. Important members are fit, predict. GridSearchCV implements a “fit” and a “score” method. It also implements “predict”, “predict_proba”, “decision_function”, “transform” and “inverse_transform” if they are implemented in the estimator used. The parameters of the estimator used to apply these methods are optimized by cross-validated grid-search over a parameter grid. Function: print_results
###Code
def print_results(results,name,filename_pr):
with open(filename_pr, mode='w') as file_object:
print(name,file=file_object)
print(name)
print('BEST PARAMS: {}\n'.format(results.best_params_),file=file_object)
print('BEST PARAMS: {}\n'.format(results.best_params_))
means = results.cv_results_['mean_test_score']
stds = results.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, results.cv_results_['params']):
print('{} {} (+/-{}) for {}'.format(name,round(mean, 3), round(std * 2, 3), params),file=file_object)
print('{} {} (+/-{}) for {}'.format(name,round(mean, 3), round(std * 2, 3), params))
print(GridSearchCV)
###Output
<class 'sklearn.model_selection._search.GridSearchCV'>
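Besides the text file written by `print_results`, the full grid of results from any of the fitted searches below can also be inspected as a table. This is a small sketch, not part of the original notebook, and it assumes `cv` is a `GridSearchCV` object that has already been fit:

```
import pandas as pd

# Turn the cross-validation results of a fitted GridSearchCV (assumed name: cv) into a DataFrame
results = pd.DataFrame(cv.cv_results_)
print(results[['params', 'mean_test_score', 'std_test_score', 'rank_test_score']]
      .sort_values('rank_test_score')
      .head())
```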
###Markdown
7.1 Machine Learning Models: Logistic Regression Logistic Regression: Hyperparameter used in GridSearchCV HP1, C: float, optional (default=1.0). Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization. Details: Regularization is when a penalty of increasing strength is applied to prevent overfitting. The inverse of regularization strength means that as the value of C goes up, the regularization strength goes down, and vice versa. Values chosen: 'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]
###Code
LR_model_dir=os.path.join(currentDirectory,'LR_model.pkl')
if os.path.exists(LR_model_dir) == False:
lr = LogisticRegression()
parameters = {
'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]
}
cv=GridSearchCV(lr, parameters, cv=5)
cv.fit(tr_features,tr_labels.values.ravel())
print_results(cv,'Logistic Regression (LR)','LR_GridSearchCV_results.txt')
cv.best_estimator_
LR_model_dir=os.path.join(currentDirectory,'LR_model.pkl')
joblib.dump(cv.best_estimator_,LR_model_dir)
else:
print('Already have LR')
###Output
Already have LR
###Markdown
7.2 Machine Learning Models: Support Vector Machine Support Vector Machine: Hyperparameter used in GridSearchCV HP1, kernel: string, optional (default=’rbf’). Specifies the kernel type to be used in the algorithm. It must be one of ‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’ or a callable. If none is given, ‘rbf’ will be used. If a callable is given it is used to pre-compute the kernel matrix from data matrices; that matrix should be an array of shape (n_samples, n_samples). Details: A linear kernel type is good when the data is linearly separable, which means it can be separated by a single line. A radial basis function (rbf) kernel is an exponential function of the squared Euclidean distance between two vectors and a constant. Since the value of the RBF kernel decreases with distance and ranges between zero and one, it has a ready interpretation as a similarity measure. Values chosen: 'kernel': ['linear','rbf'] HP2, C: float, optional (default=1.0). Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared l2 penalty. Details: Regularization is when a penalty of increasing strength is applied to prevent overfitting. The inverse of regularization strength means that as the value of C goes up, the regularization strength goes down, and vice versa. Values chosen: 'C': [0.1, 1, 10]
###Code
print(SVC())
SVM_model_dir=os.path.join(currentDirectory,'SVM_model.pkl')
if os.path.exists(SVM_model_dir) == False:
svc = SVC()
parameters = {
'kernel': ['linear','rbf'],
'C': [0.1, 1, 10]
}
cv=GridSearchCV(svc,parameters, cv=5)
cv.fit(tr_features, tr_labels.values.ravel())
print_results(cv,'Support Vector Machine (SVM)','SVM_GridSearchCV_results.txt')
cv.best_estimator_
SVM_model_dir=os.path.join(currentDirectory,'SVM_model.pkl')
joblib.dump(cv.best_estimator_,SVM_model_dir)
else:
print('Already have SVM')
###Output
Already have SVM
###Markdown
7.3 Machine Learning Models: Neural Network Neural Network: (sklearn) Hyperparameter used in GridSearchCV HP1, hidden_layer_sizes: tuple, length = n_layers - 2, default (100,). The ith element represents the number of neurons in the ith hidden layer. Details: A rule of thumb is (2/3) * (number of input features) = neurons per hidden layer; with the 30 input features here, that suggests roughly 20 neurons per hidden layer. Values chosen: 'hidden_layer_sizes': [(10,),(50,),(100,)] HP2, activation: {‘identity’, ‘logistic’, ‘tanh’, ‘relu’}, default ‘relu’. Activation function for the hidden layer. Details: * ‘identity’, no-op activation, useful to implement linear bottleneck, returns f(x) = x * ‘logistic’, the logistic sigmoid function, returns f(x) = 1 / (1 + exp(-x)). * ‘tanh’, the hyperbolic tan function, returns f(x) = tanh(x). * ‘relu’, the rectified linear unit function, returns f(x) = max(0, x) Values chosen: 'activation': ['relu','tanh','logistic'] HP3, learning_rate: {‘constant’, ‘invscaling’, ‘adaptive’}, default ‘constant’. Learning rate schedule for weight updates. Details: * ‘constant’ is a constant learning rate given by ‘learning_rate_init’. * ‘invscaling’ gradually decreases the learning rate at each time step ‘t’ using an inverse scaling exponent of ‘power_t’. effective_learning_rate = learning_rate_init / pow(t, power_t) * ‘adaptive’ keeps the learning rate constant to ‘learning_rate_init’ as long as training loss keeps decreasing. Each time two consecutive epochs fail to decrease training loss by at least tol, or fail to increase validation score by at least tol if ‘early_stopping’ is on, the current learning rate is divided by 5. Only used when solver='sgd'. Values chosen: 'learning_rate': ['constant','invscaling','adaptive']
###Code
print(MLPClassifier())
MLP_model_dir=os.path.join(currentDirectory,'MLP_model.pkl')
if os.path.exists(MLP_model_dir) == False:
mlp = MLPClassifier()
parameters = {
'hidden_layer_sizes': [(10,),(50,),(100,)],
'activation': ['relu','tanh','logistic'],
'learning_rate': ['constant','invscaling','adaptive']
}
cv=GridSearchCV(mlp, parameters, cv=5)
cv.fit(tr_features, tr_labels.values.ravel())
print_results(cv,'Neural Network (MLP)','MLP_GridSearchCV_results.txt')
cv.best_estimator_
MLP_model_dir=os.path.join(currentDirectory,'MLP_model.pkl')
joblib.dump(cv.best_estimator_,MLP_model_dir)
else:
print('Already have MLP')
###Output
Already have MLP
###Markdown
7.4 Machine Learning Models: Random Forest Random Forest: Hyperparameter used in GridSearchCV HP1, n_estimators: integer, optional (default=100)The number of trees in the forest.Changed in version 0.22: The default value of n_estimators changed from 10 to 100 in 0.22. DetailsUsually 500 does the trick and the accuracy and out of bag error doesn't change much after. Values chosen'n_estimators': [500], HP2, max_depth: integer or None, optional (default=None)The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples. DetailsNone usually does the trick, but a few shallow trees are tested. Values chosen'max_depth': [5,7,9, None]
###Code
print(RandomForestClassifier())
RF_model_dir=os.path.join(currentDirectory,'RF_model.pkl')
if os.path.exists(RF_model_dir) == False:
rf = RandomForestClassifier(oob_score=False)
parameters = {
'n_estimators': [500],
'max_depth': [5,7,9, None]
}
cv = GridSearchCV(rf, parameters, cv=5)
cv.fit(tr_features, tr_labels.values.ravel())
print_results(cv,'Random Forest (RF)','RF_GridSearchCV_results.txt')
cv.best_estimator_
RF_model_dir=os.path.join(currentDirectory,'RF_model.pkl')
joblib.dump(cv.best_estimator_,RF_model_dir)
else:
print('Already have RF')
###Output
Already have RF
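The out-of-bag error mentioned above can also be estimated directly from a single forest, outside of the grid search. This is a small sketch, not part of the original notebook, assuming the `tr_features` and `tr_labels` arrays defined earlier; the hyperparameters here are illustrative:

```
from sklearn.ensemble import RandomForestClassifier

# Fit one forest with out-of-bag scoring enabled
rf_oob = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=42)
rf_oob.fit(tr_features, tr_labels.values.ravel())
print('Out-of-bag accuracy estimate:', round(rf_oob.oob_score_, 3))
```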
###Markdown
7.4 Machine Learning Models: Gradient Boosting Gradient Boosting: Hyperparameter used in GridSearchCV HP1, n_estimators: int (default=100)The number of boosting stages to perform. Gradient boosting is fairly robust to over-fitting so a large number usually results in better performance. DetailsUsually 500 does the trick and the accuracy and out of bag error doesn't change much after. Values chosen'n_estimators': [5, 50, 250, 500], HP2, max_depth: integer, optional (default=3)maximum depth of the individual regression estimators. The maximum depth limits the number of nodes in the tree. Tune this parameter for best performance; the best value depends on the interaction of the input variables. DetailsA variety of shallow trees are tested. Values chosen'max_depth': [1, 3, 5, 7, 9], HP3, learning_rate: float, optional (default=0.1)learning rate shrinks the contribution of each tree by learning_rate. There is a trade-off between learning_rate and n_estimators. DetailsA variety was chosen because of the trade-off. Values chosen'learning_rate': [0.01, 0.1, 1]
###Code
print(GradientBoostingClassifier())
GB_model_dir=os.path.join(currentDirectory,'GB_model.pkl')
if os.path.exists(GB_model_dir) == False:
gb = GradientBoostingClassifier()
parameters = {
'n_estimators': [5, 50, 250, 500],
'max_depth': [1, 3, 5, 7, 9],
'learning_rate': [0.01, 0.1, 1]
}
cv=GridSearchCV(gb, parameters, cv=5)
cv.fit(tr_features, tr_labels.values.ravel())
print_results(cv,'Gradient Boost (GB)','GR_GridSearchCV_results.txt')
cv.best_estimator_
GB_model_dir=os.path.join(currentDirectory,'GB_model.pkl')
joblib.dump(cv.best_estimator_,GB_model_dir)
else:
print('Already have GB')
###Output
Already have GB
###Markdown
7.5 Machine Learning Models: eXtreme Gradient Boosting eXtreme Gradient Boosting: Hyperparameter used in GridSearchCV HP1, n_estimators: (int) – Number of trees to fit. DetailsUsually 500 does the trick and the accuracy and out of bag error doesn't change much after. Values chosen'n_estimators': [5, 50, 250, 500], HP2, max_depth: (int) – Maximum tree depth for base learners. DetailsA variety of shallow trees are tested. Values chosen'max_depth': [1, 3, 5, 7, 9], HP3, learning_rate: (float) – Boosting learning rate (xgb’s “eta”) DetailsA variety was chosen because of the trade-off. Values chosen'learning_rate': [0.01, 0.1, 1]
###Code
XGB_model_dir=os.path.join(currentDirectory,'XGB_model.pkl')
if os.path.exists(XGB_model_dir) == False:
xgb = XGBClassifier()
parameters = {
'n_estimators': [5, 50, 250, 500],
'max_depth': [1, 3, 5, 7, 9],
'learning_rate': [0.01, 0.1, 1]
}
cv=GridSearchCV(xgb, parameters, cv=5)
cv.fit(tr_features, tr_labels.values.ravel())
print_results(cv,'eXtreme Gradient Boost (XGB)','XGB_GridSearchCV_results.txt')
cv.best_estimator_
XGB_model_dir=os.path.join(currentDirectory,'XGB_model.pkl')
joblib.dump(cv.best_estimator_,XGB_model_dir)
else:
print('Already have XGB')
###Output
Already have XGB
###Markdown
8. Evaluate Models
###Code
## all models
models = {}
#for mdl in ['LR', 'SVM', 'MLP', 'RF', 'GB','XGB']:
for mdl in ['LR', 'SVM', 'MLP', 'RF', 'GB','XGB']:
model_path=os.path.join(currentDirectory,'{}_model.pkl')
models[mdl] = joblib.load(model_path.format(mdl))
###Output
_____no_output_____
###Markdown
Function: evaluate_model
###Code
def evaluate_model(name, model, features, labels, y_test_ev, fc):
start = time()
pred = model.predict(features)
end = time()
y_truth=y_test_ev
accuracy = round(accuracy_score(labels, pred), 3)
precision = round(precision_score(labels, pred), 3)
recall = round(recall_score(labels, pred), 3)
print('{} -- Accuracy: {} / Precision: {} / Recall: {} / Latency: {}ms'.format(name,
accuracy,
precision,
recall,
round((end - start)*1000, 1)))
pred=pd.DataFrame(pred)
pred.columns=['diagnosis']
# Convert Diagnosis for Cancer from Binary to Categorical
diagnosis_name={0:'Benign',1:'Malignant'}
y_truth['diagnosis']=y_truth['diagnosis'].map(diagnosis_name)
pred['diagnosis']=pred['diagnosis'].map(diagnosis_name)
class_names = ['Benign','Malignant']
cm = confusion_matrix(y_test_ev, pred, labels=class_names)
FP_L='False Positive'
FP = cm[0][1]
#print(FP_L)
#print(FP)
FN_L='False Negative'
FN = cm[1][0]
#print(FN_L)
#print(FN)
TP_L='True Positive'
TP = cm[1][1]
#print(TP_L)
#print(TP)
TN_L='True Negative'
TN = cm[0][0]
#print(TN_L)
#print(TN)
#TPR_L= 'Sensitivity, hit rate, recall, or true positive rate'
TPR_L= 'Sensitivity'
TPR = round(TP/(TP+FN),3)
#print(TPR_L)
#print(TPR)
#TNR_L= 'Specificity or true negative rate'
TNR_L= 'Specificity'
TNR = round(TN/(TN+FP),3)
#print(TNR_L)
#print(TNR)
#PPV_L= 'Precision or positive predictive value'
PPV_L= 'Precision'
PPV = round(TP/(TP+FP),3)
#print(PPV_L)
#print(PPV)
#NPV_L= 'Negative predictive value'
NPV_L= 'NPV'
NPV = round(TN/(TN+FN),3)
#print(NPV_L)
#print(NPV)
#FPR_L= 'Fall out or false positive rate'
FPR_L= 'FPR'
FPR = round(FP/(FP+TN),3)
#print(FPR_L)
#print(FPR)
#FNR_L= 'False negative rate'
FNR_L= 'FNR'
FNR = round(FN/(TP+FN),3)
#print(FNR_L)
#print(FNR)
#FDR_L= 'False discovery rate'
FDR_L= 'FDR'
FDR = round(FP/(TP+FP),3)
#print(FDR_L)
#print(FDR)
ACC_L= 'Accuracy'
ACC = round((TP+TN)/(TP+FP+FN+TN),3)
#print(ACC_L)
#print(ACC)
stats_data = {'Name':name,
ACC_L:ACC,
FP_L:FP,
FN_L:FN,
TP_L:TP,
TN_L:TN,
TPR_L:TPR,
TNR_L:TNR,
PPV_L:PPV,
NPV_L:NPV,
FPR_L:FPR,
FNR_L:FNR}
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(cm,cmap=plt.cm.gray_r)
plt.title('Figure {}.A: {} Confusion Matrix on Unseen Test Data'.format(fc,name),y=1.08)
fig.colorbar(cax)
ax.set_xticklabels([''] + class_names)
ax.set_yticklabels([''] + class_names)
# Loop over data dimensions and create text annotations.
for i in range(len(class_names)):
for j in range(len(class_names)):
text = ax.text(j, i, cm[i, j],
ha="center", va="center", color="r")
plt.xlabel('Predicted')
plt.ylabel('True')
plt.savefig('Figure{}.A_{}_Confusion_Matrix.png'.format(fc,name),dpi=400,bbox_inches='tight')
#plt.show()
if name == 'RF' or name == 'GB' or name == 'XGB':
# Get numerical feature importances
importances = list(model.feature_importances_)
importances=100*(importances/max(importances))
feature_list = list(features.columns)
sorted_ID=np.argsort(importances)
plt.figure()
plt.barh(sort_list(feature_list,importances),importances[sorted_ID],align='center')
plt.title('Figure {}.B: {} Variable Importance Plot'.format(fc,name))
plt.xlabel('Relative Importance')
plt.ylabel('Feature')
plt.savefig('Figure{}.B_{}_Variable_Importance_Plot.png'.format(fc,name),dpi=300,bbox_inches='tight')
#plt.show()
return accuracy,name, model, stats_data
###Output
_____no_output_____
###Markdown
Function: sort_list
###Code
def sort_list(list1, list2):
zipped_pairs = zip(list2, list1)
z = [x for _, x in sorted(zipped_pairs)]
return z
###Output
_____no_output_____
###Markdown
Search for best model using test features
###Code
ev_accuracy=[None]*len(models)
ev_name=[None]*len(models)
ev_model=[None]*len(models)
ev_stats=[None]*len(models)
count=1
for name, mdl in models.items():
y_test_ev=y_test
ev_accuracy[count-1],ev_name[count-1],ev_model[count-1], ev_stats[count-1] = evaluate_model(name,mdl,val_features, val_labels, y_test_ev,count)
diagnosis_name={'Benign':0,'Malignant':1}
y_test['diagnosis']=y_test['diagnosis'].map(diagnosis_name)
count=count+1
best_name=ev_name[ev_accuracy.index(max(ev_accuracy))] #picks the maximum accuracy
print('Best Model:',best_name,'with Accuracy of ',max(ev_accuracy))
best_model=ev_model[ev_accuracy.index(max(ev_accuracy))] #picks the maximum accuracy
if best_name == 'RF' or best_name == 'GB' or best_name == 'XGB':
# Get numerical feature importances
importances = list(best_model.feature_importances_)
importances=100*(importances/max(importances))
feature_list = list(X.columns)
sorted_ID=np.argsort(importances)
plt.figure()
plt.barh(sort_list(feature_list,importances),importances[sorted_ID],align='center')
plt.title('Figure 7: Variable Importance Plot -- {}'.format(best_name))
plt.xlabel('Relative Importance')
plt.ylabel('Feature')
plt.savefig('Figure7.png',dpi=300,bbox_inches='tight')
plt.show()
###Output
Best Model: GB with Accuracy of 0.974
###Markdown
9. Conclusions When it comes to diagnosing breast cancer, we want to make sure we don't have too many false positives (you are told you have cancer and may start treatment, but you don't actually have it) or false negatives (you have cancer, but are told you don't). Therefore, the model with the highest overall accuracy is chosen. All of the models performed well after fine-tuning their hyperparameters, but the best model was the Gradient Boosted model, with an accuracy of ~97.4%. Out of the 20% of data withheld for this test (114 random individuals), only 3 were misdiagnosed. Two of them were misdiagnosed via False Positive, which means they did not have cancer but were told they did. One was misdiagnosed via False Negative, which means they had cancer but was told they didn't. No model is perfect, but I am happy about how accurate my model is here. If on average only 3 people out of 114 are misdiagnosed, that is a good start for making a model. Furthermore, the Feature Importance plots show that the "concave points mean" was by far the most significant feature to extract from a biopsy and should be taken each time if possible for predicting breast cancer.
###Code
ev_stats=pd.DataFrame(ev_stats)
print(ev_stats.head(10))
###Output
Name Accuracy False Positive False Negative True Positive \
0 LR 0.965 2 2 41
1 SVM 0.939 3 4 39
2 MLP 0.965 3 1 42
3 RF 0.965 1 3 40
4 GB 0.974 2 1 42
5 XGB 0.956 2 3 40
True Negative Sensitivity Specificity Precision NPV FPR FNR
0 69 0.953 0.972 0.953 0.972 0.028 0.047
1 68 0.907 0.958 0.929 0.944 0.042 0.071
2 68 0.977 0.958 0.933 0.986 0.042 0.067
3 70 0.930 0.986 0.976 0.959 0.014 0.024
4 69 0.977 0.972 0.955 0.986 0.028 0.045
5 69 0.930 0.972 0.952 0.958 0.028 0.048
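Since each tuned model was saved to disk with joblib during the grid searches, a later session could reload the selected model and score new biopsies without re-running any training. A minimal sketch, not part of the original run, assuming `GB_model.pkl` and `data.csv` sit in the working directory and that new samples carry the same 30 feature columns used for training:

```
import joblib
import pandas as pd

# Reload the persisted Gradient Boosting model chosen above (assumed file name)
model = joblib.load('GB_model.pkl')

# Build a small batch of samples with the same 30 features the model was trained on
new_data = pd.read_csv('data.csv').drop(['id', 'diagnosis', 'Unnamed: 32'], axis=1).head(5)

print(model.predict(new_data))        # 0 = benign, 1 = malignant
print(model.predict_proba(new_data))  # class probabilities for each sample
```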
|
Informatics/Deep Learning/TensorFlow - deeplearning.ai/3. NLP/Course_3_Week_2_Lesson_3.ipynb | ###Markdown
###Code
# NOTE: PLEASE MAKE SURE YOU ARE RUNNING THIS IN A PYTHON3 ENVIRONMENT
import tensorflow as tf
print(tf.__version__)
# Double check TF 2.0x is installed. If you ran the above block, there was a
# 'reset all runtimes' button at the bottom that you needed to press
import tensorflow as tf
print(tf.__version__)
# If the import fails, run this
# !pip install -q tensorflow-datasets
import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds
imdb, info = tfds.load("imdb_reviews/subwords8k", with_info=True, as_supervised=True, data_dir='./', download=False)
train_data, test_data = imdb['train'], imdb['test']
tokenizer = info.features['text'].encoder
print(tokenizer.subwords)
sample_string = 'TensorFlow, from basics to mastery'
tokenized_string = tokenizer.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))
original_string = tokenizer.decode(tokenized_string)
print ('The original string: {}'.format(original_string))
for ts in tokenized_string:
print ('{} ----> {}'.format(ts, tokenizer.decode([ts])))
BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = train_data.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, tf.compat.v1.data.get_output_shapes(train_dataset))
test_dataset = test_data.padded_batch(BATCH_SIZE, tf.compat.v1.data.get_output_shapes(test_data))
embedding_dim = 64
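# Model: trainable subword embeddings -> global average pooling over the sequence -> small dense classifier head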
model = tf.keras.Sequential([
tf.keras.layers.Embedding(tokenizer.vocab_size, embedding_dim),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(6, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.summary()
num_epochs = 10
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
history = model.fit(train_dataset, epochs=num_epochs, validation_data=test_dataset)
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
# remember we are working with subwords, not words
e = model.layers[0]
weights = e.get_weights()[0]
print(weights.shape) # shape: (vocab_size, embedding_dim)
import io
out_v = io.open('vecs3.tsv', 'w', encoding='utf-8')
out_m = io.open('meta3.tsv', 'w', encoding='utf-8')
for word_num in range(1, tokenizer.vocab_size):
word = tokenizer.decode([word_num])
embeddings = weights[word_num]
out_m.write(word + "\n")
out_v.write('\t'.join([str(x) for x in embeddings]) + "\n")
out_v.close()
out_m.close()
try:
from google.colab import files
except ImportError:
pass
else:
files.download('vecs3.tsv')
files.download('meta3.tsv')
###Output
(8185, 64)
|
normal/data_process.ipynb | ###Markdown
Pair images with their annotation files
###Code
# 在图片存在的情况下,标注文件不存在 则转移图片;反之亦然
def move_img(imgp, txtp, error):
if os.path.exists(imgp) and not os.path.exists(txtp):
shutil.move(imgp, error)
def check_image_label(image_paths, label_root):
for imgp in tqdm(image_paths):
name = os.path.basename(imgp)
txtp = os.path.join(label_root, name.replace('jpg', 'txt'))
move_img(imgp, txtp, error)
label_root = '/mnt/data/street/tricycle/'
image_root = '/mnt/data/street/tricycle/'
error = '/mnt/data/street/error/tricycle/'
# coco
# train_image_paths = glob(os.path.join(image_root, 'train2017/', '*.jpg'))
# val_image_paths = glob(os.path.join(image_root, 'val2017/', '*.jpg'))
# check_image_label(train_image_paths, os.path.join(label_root, 'train2017'))
# street
image_paths = glob(os.path.join(image_root, '*.jpg'))
check_image_label(image_paths, label_root)
###Output
100%|██████████| 546/546 [00:00<00:00, 50749.92it/s]
###Markdown
Rename the crawled data and move it to the corresponding location
###Code
img_paths = glob('/mnt/data/street/motor/*.jpg') + \
glob('/mnt/data/street/shop/*.jpg') + \
glob('/mnt/data/street/trashbin/*.jpg') + \
glob('/mnt/data/street/tricycle/*.jpg')
train_img_paths, val_img_paths = train_test_split(img_paths, test_size=0.1, random_state=47)
train_txt_paths = [p.replace('jpg', 'txt') for p in train_img_paths]
val_txt_paths = [p.replace('jpg', 'txt') for p in val_img_paths]
for p in val_txt_paths:
shutil.copy(p, '/mnt/data/street_sub/labels/val/')
# for i, imgp in enumerate(img_paths):
# txtp = imgp.replace('jpg', 'txt')
# xmlp = imgp.replace('jpg', 'xml')
# root = os.path.dirname(imgp)
# new_profix = 'rz_' + str(i)
# new_img_name = new_profix + '.jpg'
# new_img_path = os.path.join(root, new_img_name)
# new_txt_path = new_img_path.replace('jpg', 'txt')
# new_xml_path = new_img_path.replace('jpg', 'xml')
# os.rename(imgp, os.path.join(root, new_img_path))
# os.rename(txtp, os.path.join(root, new_txt_path))
# os.rename(xmlp, os.path.join(root, new_xml_path))
###Output
_____no_output_____ |
exploring data more.ipynb | ###Markdown
See this tutorial http://www.dataperspective.info/2019/02/how-to-import-data-into-google-colab.html for info on how to import data from google drive.
###Code
import numpy as np
import random
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import os
!wget https://drive.google.com/drive/folders/1bY4gtwdeu7x9otU3eb_0BFXwaXrFY59C/z_first_ionization_z013.00_Hllfilter1_RHIIfilter1_RHImax50_200_300Mpc
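# NOTE: wget on a Google Drive *folder* URL will generally not download the file itself.
# In Colab it is usually easier to mount Drive (from google.colab import drive; drive.mount('/content/drive'))
# or to fetch a single file by its id with the gdown package, and then point `fname` at the local copy.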
fname = '1bY4gtwdeu7x9otU3eb_0BFXwaXrFY59C/z_first_ionization_z013.00_Hllfilter1_RHIIfilter1_RHImax50_200_300Mpc'
f = open(fname, "rb")
###Output
_____no_output_____ |
Python_Stock/Candlestick_Patterns/CandlestickExample.ipynb | ###Markdown
Candlestick Chart Example mplfinance https://github.com/matplotlib/mplfinance
###Code
import yfinance as yf
import mplfinance as mpf
import pandas as pd
import matplotlib.pyplot as plt
symbol = "NVDA"
start = "2020-12-01"
end = "2021-10-04"
data = yf.download(symbol, start=start, end=end)
mpf.plot(data,volume=True,type='candle',
#savefig=dict(fname=figsave("full"),dpi=1200)
title = symbol + " Candlestick Chart")
import datetime
x=datetime.datetime.now()
y=str(x.year-1)+'-'+str(x.strftime("%m"))
z=str(x.year)+'-'+str(int(x.strftime("%m"))-1)
#Last one year chart
s = mpf.make_mpf_style(base_mpf_style='charles',mavcolors=['#1f77b4','#ff7f0e','#2ca02c'])
mpf.plot(data[y:],
volume=True,
type='candle',
figratio=(24,10),
mav=(20,50,100),
style= s,
ylabel='Price (₹)',
title='Stock',
ylabel_lower='Traded\nVolume',
tight_layout=False,
#savefig=dict(fname=figsave("Year"),dpi=1200)
#show_nontrading=True if needed to show trading day gaps
)
s = mpf.make_mpf_style(base_mpf_style='charles',mavcolors=['#1f77b4','#ff7f0e','#2ca02c'])
mpf.plot(data[z:],
volume=True,
type='candle',
figratio=(24,10),
mav=(20,50,100),
style= s,
ylabel='Price (₹)',
title='Stock',
ylabel_lower='Traded\nVolume',
tight_layout=False,
#savefig=dict(fname=figsave("Month"),dpi=1200)
#show_nontrading=True if needed to show trading day gaps
)
###Output
_____no_output_____
###Markdown
TA-Lib Pattern Recognition Candlestick https://mrjbq7.github.io/ta-lib/func_groups/pattern_recognition.html
###Code
import talib
df = yf.download("NVDA", start="2020-12-01", end="2021-10-04")
# Get Morning star pattern analysis
df['Morningstars'] = talib.CDLMORNINGSTAR(df['Open'], df['High'], df['Low'], df['Close'])
df.loc[df['Morningstars'] !=0]
df[df["Morningstars"] != 0]
df['Adj Close'].loc[df["Morningstars"] != 0]
df['Adj Close'].loc[df["Morningstars"] != 0].index
morning_stars = df['Morningstars']
morning_stars[morning_stars !=0]
morning_stars[morning_stars !=0].index
###Output
_____no_output_____
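###Markdown
The same call pattern works for the other TA-Lib candlestick functions; for example, the engulfing pattern (positive values flag bullish engulfing bars, negative values bearish ones):
###Code
df['Engulfing'] = talib.CDLENGULFING(df['Open'], df['High'], df['Low'], df['Close'])
df[df['Engulfing'] != 0]
###Output
_____no_output_____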
###Markdown
Create Candlestick Chart using mplfinance
###Code
from mplfinance.original_flavor import candlestick_ohlc
from matplotlib import dates as mdates
import datetime as dt
# input
symbol = 'AMD'
start = '2019-01-01'
end = '2020-01-01'
# Read data
df = yf.download(symbol,start,end)
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = pd.to_datetime(dfc['Date'])
dfc['Date'] = dfc['Date'].apply(mdates.date2num)
dfc.head()
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(2, 1, 1)
candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax1.xaxis_date()
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax1.grid(True, which='both')
ax1.minorticks_on()
ax1v = ax1.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
ax1v.axes.yaxis.set_ticklabels([])
ax1v.set_ylim(0, 3*df.Volume.max())
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
###Output
_____no_output_____
###Markdown
Different types of background and style
###Code
mpf.available_styles()
s = mpf.make_mpf_style(base_mpf_style='nightclouds', rc={'font.size': 6})
fig = mpf.figure(figsize=(14,10), style=s)
ax1 = plt.subplot(2, 1, 1)
candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax1.xaxis_date()
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
#ax1.grid(True, which='both')
#ax1.minorticks_on()
ax1v = ax1.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
ax1v.axes.yaxis.set_ticklabels([])
ax1v.set_ylim(0, 3*df.Volume.max())
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
s = mpf.make_mpf_style(base_mpf_style='blueskies', rc={'font.size': 12})
fig = mpf.figure(figsize=(14,10), style=s)
ax = plt.subplot(2, 1, 1)
av = fig.add_subplot(2,1,2, sharex=ax)
mpf.plot(df, type='candle', volume=av, ax=ax, ylabel = 'Prices')
ax.set_title('Stock '+ symbol +' Closing Price')
# Trim volume to avoid exponential form
df['Volume'] = df['Volume'] / 1000
# Create MACD
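# (talib.MACD defaults to the standard fast=12, slow=26, signal=9 periods unless overridden)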
df["macd"], df["macd_signal"], df["macd_hist"] = talib.MACD(df['Close'])
# macd panel
colors = ['g' if v >= 0 else 'r' for v in df["macd_hist"]]
macd_hist_plot = mpf.make_addplot(df["macd_hist"], type='bar', panel=1, color=colors)
# Plot
mpf.plot(df, type='candle', style='yahoo', addplot=macd_hist_plot, title='Stock '+ symbol +' Closing Price', ylabel='')
mpf.plot(df, type='candle', style='yahoo', figratio=(12,6) ,addplot=macd_hist_plot, title='Stock '+ symbol +' Closing Price', ylabel='')
###Output
_____no_output_____ |
AppStat2022/Week2/ExampleSolutions/LikelihoodFit/LikelihoodFit_ExampleSolution.ipynb | ###Markdown
Principle of Maximum Likelihood Description:Python script for illustrating the principle of maximum likelihood and a likelihood fit.__This is both an exercise and an attempt to illustrate four things:__ 1. How to make a (binned and unbinned) Likelihood function/fit. 2. The difference and a comparison between a Chi-square and a (binned) Likelihood. 3. The difference and a comparison between a binned and unbinned Likelihood. 4. What goes on behind the scenes in Minuit, when it is asked to fit something.In this respect, the exercise is more of an illustration than something to be used directly, which is why it is followed later by another exercise, where you can test if you have understood the differences, and how and when to apply which fit method.The example uses 50 exponentially distributed random times, with the goal of finding the best estimate of the lifetime (data is generated with lifetime, tau = 1). Three estimates are considered: 1. Chi-square fit (chi2) 2. Binned Likelihood fit (bllh) 3. Unbinned Likelihood fit (ullh)The three methods are based on a scan of values for tau in the range [0.5, 1.5]. For each value of tau, the chi2, bllh, and ullh are calculated. In the two likelihood cases, it is actually -2*log(likelihood) which is calculated, and by now you should understand why. Note that the unbinned likelihood is in principle the "optimal" fit, but also the most difficult for several reasons (convergence, numerical problems, implementation, speed, etc.). However, all three methods/constructions essentially yield the same results, when there is enough statistics (i.e. errors are Gaussian), though the $\chi^2$ also gives a fit quality. The problem is explicitly chosen to have only one fit parameter, such that simple 1D graphs can show what goes on. In this case, the analytical solution (simple mean) is actually preferred (see Barlow). Real world problems will almost surely be more complex.Also, the exercise is mostly for illustration. In reality, one would hardly ever calculate and plot the Chi-square or Likelihood values, but rather do the minimization using an algorithm (Minuit) to do the hard work. Authors: - Troels C. Petersen (Niels Bohr Institute, [email protected])- Étienne Bourbeau ([email protected]) Date: - 26-11-2021 (latest update) Reference:- Barlow, chapter 5 (5.1-5.7)- Cowan, chapter 6***
###Code
import numpy as np # Matlab like syntax for linear algebra and functions
import matplotlib.pyplot as plt # Plots and figures like you know them from Matlab
import seaborn as sns # Make the plots nicer to look at
from iminuit import Minuit # The actual fitting tool, better than scipy's
import sys # Module to see files and folders in directories
from scipy import stats
sys.path.append('../../../External_Functions')
from ExternalFunctions import Chi2Regression, BinnedLH, UnbinnedLH
from ExternalFunctions import nice_string_output, add_text_to_ax # useful functions to print fit results on figure
plt.rcParams['font.size'] = 16 # set some basic plotting parameters
###Output
_____no_output_____
###Markdown
Program settings:
###Code
save_plots = False # Determining if plots are saved or not
verbose = True # Should the program print or not?
veryverbose = True # Should the program print a lot or not?
ScanChi2 = True # In addition to fit for minimum, do a scan...
# Parameters of the problem:
Ntimes = 50 # Number of time measurements.
tau_truth = 1.0; # We choose (like Gods!) the lifetime.
# Binning:
Nbins = 50 # Number of bins in histogram
tmax = 10.0 # Maximum time in histogram
binwidth = tmax / Nbins # Size of bins (s)
# General settings:
r = np.random # Random numbers
r.seed(42) # We set the numbers to be random, but the same for each run
###Output
_____no_output_____
###Markdown
Generate data:
###Code
# Produce array of exponentially distributed times and put them in a histogram:
t = r.exponential(tau_truth, Ntimes) # Exponential with lifetime tau.
yExp, xExp_edges = np.histogram(t, bins=Nbins, range=(0, tmax))
###Output
_____no_output_____
###Markdown
Is the data plotted the way we would like? Let's check...
###Code
# In case you want to check that the numbers really come out as you want to (very healthy to do at first):
if (veryverbose) :
for index, time in enumerate(t) :
print(f" {index:2d}: t = {time:5.3f}")
if index > 10:
break # let's restrain ourselves
###Output
0: t = 0.469
1: t = 3.010
2: t = 1.317
3: t = 0.913
4: t = 0.170
5: t = 0.170
6: t = 0.060
7: t = 2.011
8: t = 0.919
9: t = 1.231
10: t = 0.021
11: t = 3.504
###Markdown
Looks like values are coming in, but are they actually following an exponential? Remember the importance of __plotting your data beforehand__!
###Code
X_center = xExp_edges[:-1] + (xExp_edges[1]-xExp_edges[0])/2.0 # Get the value of the histogram bin centers
plt.plot(X_center,yExp,'o')
plt.show()
###Output
_____no_output_____
###Markdown
Check that it looks like you are producing the data that you want. If this is the case, move on (and possibly comment out the plot!). Analyse data:The following is "a manual fit", i.e. scanning over possible values of the fitting parameter(s) - here luckily only one, tau - and seeing what value of chi2, bllh, and ullh it yields. When plotting these, one should find a parabola, the minimum value of which is the optimal fitting parameter of tau. The rate of increase around this minimum represents the uncertainty of the fitting parameter.
###Code
# Define the number of tau values and their range to test in Chi2 and LLH:
# As we know the "truth", namely tau = 1, the range [0.5, 1.5] seems fitting for the mean.
# The number of bins can be increased at will, but for now 50 seems fitting.
Ntau_steps = 50
min_tau = 0.5
max_tau = 1.5
delta_tau = (max_tau-min_tau) / Ntau_steps
# Loop over hypothesis for the value of tau and calculate Chi2 and (B)LLH:
chi2_minval = 999999.9 # Minimal Chi2 value found
chi2_minpos = 0.0 # Position (i.e. time) of minimal Chi2 value
bllh_minval = 999999.9
bllh_minpos = 0.0
ullh_minval = 999999.9
ullh_minpos = 0.0
tau = np.zeros(Ntau_steps+1)
chi2 = np.zeros(Ntau_steps+1)
bllh = np.zeros(Ntau_steps+1)
ullh = np.zeros(Ntau_steps+1)
# Now loop of POSSIBLE tau estimates:
for itau in range(Ntau_steps+1):
tau_hypo = min_tau + itau*delta_tau # Scan in values of tau
tau[itau] = tau_hypo
# Calculate Chi2 and binned likelihood (from loop over bins in histogram):
chi2[itau] = 0.0
bllh[itau] = 0.0
for ibin in range (Nbins) :
        # Note: The number of EXPECTED events is the integral over the bin!
xlow_bin = xExp_edges[ibin]
xhigh_bin = xExp_edges[ibin+1]
# Given the start and end of the bin, we calculate the INTEGRAL over the bin,
# to get the expected number of events in that bin:
nexp = Ntimes * (np.exp(-xlow_bin/tau_hypo) - np.exp(-xhigh_bin/tau_hypo))
# The observed number of events... that is just the data!
nobs = yExp[ibin]
if (nobs > 0): # For ChiSquare but not LLH, we need to require Nobs > 0, as we divide by this:
chi2[itau] += (nobs-nexp)**2 / nobs # Chi2 summation/function
bllh[itau] += -2.0*np.log(stats.poisson.pmf(int(nobs), nexp)) # Binned LLH function
if (veryverbose and itau == 0) :
print(f" Nexp: {nexp:10.7f} Nobs: {nobs:3.0f} Chi2: {chi2[itau]:5.1f} BLLH: {bllh[itau]:5.1f}")
# Calculate Unbinned likelihood (from loop over events):
ullh[itau] = 0.0
for time in t : # i.e. for every data point generated...
ullh[itau] += -2.0*np.log(1.0/tau_hypo*np.exp(-time/tau_hypo)) # Unbinned LLH function
if (verbose) :
print(f" {itau:3d}: tau = {tau_hypo:4.2f} chi2 = {chi2[itau]:6.2f} log(bllh) = {bllh[itau]:6.2f} log(ullh) = {ullh[itau]:6.2f}")
# Search for minimum values of chi2, bllh, and ullh:
if (chi2[itau] < chi2_minval) :
chi2_minval = chi2[itau]
chi2_minpos = tau_hypo
if (bllh[itau] < bllh_minval) :
bllh_minval = bllh[itau]
bllh_minpos = tau_hypo
if (ullh[itau] < ullh_minval) :
ullh_minval = ullh[itau]
ullh_minpos = tau_hypo
print(f" Decay time of minimum found: chi2: {chi2_minpos:7.4f}s bllh: {bllh_minpos:7.4f}s ullh: {ullh_minpos:7.4f}s")
print(f" Chi2 value at minimum: chi2 = {chi2_minval:.1f}")
###Output
Chi2 value at minimum: chi2 = 6.8
###Markdown
Plot and fit results:
###Code
# Define range around minimum to be fitted:
min_fit = 0.15
max_fit = 0.20
fig, axes = plt.subplots(2, 2, figsize=(16, 12))
ax_chi2 = axes[0,0]
ax_bllh = axes[1,0]
ax_ullh = axes[0,1]
# A fourth plot is available for plotting whatever you want :)
# ChiSquare:
# ----------
ax_chi2.plot(tau, chi2, 'k.', label='chi2')
ax_chi2.set_xlim(chi2_minpos-2*min_fit, chi2_minpos+2*max_fit)
ax_chi2.set_title("ChiSquare")
ax_chi2.set_xlabel(r"Value of $\tau$")
ax_chi2.set_ylabel("Value of ChiSquare")
# Binned Likelihood:
# ----------
ax_bllh.plot(tau, bllh,'bo')
ax_bllh.set_xlim(bllh_minpos-2*min_fit, bllh_minpos+2*max_fit)
ax_bllh.set_title("Binned Likelihood")
ax_bllh.set_xlabel(r"Value of $\tau$")
ax_bllh.set_ylabel(r"Value of $\ln{LLH}$")
# Unbinned Likelihood:
# ----------
ax_ullh.plot(tau, ullh, 'g.')
ax_ullh.set_xlim(ullh_minpos-2*min_fit, ullh_minpos+2*max_fit)
ax_ullh.set_title("Unbinned Likelihood")
ax_ullh.set_xlabel(r"Value of $\tau$")
ax_ullh.set_ylabel(r"Value of $\ln{LLH}$")
fig;
###Output
_____no_output_____
###Markdown
--- Parabola functionNote that the parabola is defined differently than normally. The parameters are: * `minval`: Minimum value (i.e. constant) * `minpos`: Minimum position (i.e. x of minimum) * `quadratic`: Quadratic term.
###Code
def func_para(x, minval, minpos, quadratic) :
return minval + quadratic*(x-minpos)**2
func_para_vec = np.vectorize(func_para) # Note: This line makes it possible to send vectors through the function!
###Output
_____no_output_____
###Markdown
--- Double parabola with different slopes on each side of the minimum:In case the uncertainties are asymmetric, the parabola will also be so, and hence needs to be fitted with two separate parabolas meeting at the top point. Parameters are now as follows: * `minval`: Minimum value (i.e. constant) * `minpos`: Minimum position (i.e. x of minimum) * `quadlow`: Quadratic term on lower side * `quadhigh`: Quadratic term on higher side
###Code
def func_asympara(x, minval, minpos, quadlow, quadhigh) :
if (x < minpos) :
return minval + quadlow*(x-minpos)**2
else :
return minval + quadhigh*(x-minpos)**2
func_asympara_vec = np.vectorize(func_asympara) # Note: This line makes it possible to send vectors through the function!
###Output
_____no_output_____
###Markdown
Perform both fits:
###Code
# Fit chi2 values with our parabola:
indexes = (tau>chi2_minpos-min_fit) & (tau<chi2_minpos+max_fit)
# Fit with parabola:
chi2_object_chi2 = Chi2Regression(func_para, tau[indexes], chi2[indexes])
minuit_chi2 = Minuit(chi2_object_chi2, minval=chi2_minval, minpos=chi2_minpos, quadratic=20.0)
minuit_chi2.errordef = 1.0
minuit_chi2.migrad()
# Fit with double parabola:
chi2_object_chi2_doublep = Chi2Regression(func_asympara, tau[indexes], chi2[indexes])
minuit_chi2_doublep = Minuit(chi2_object_chi2_doublep, minval=chi2_minval, minpos=chi2_minpos, quadlow=20.0, quadhigh=20.0)
minuit_chi2_doublep.errordef = 1.0
minuit_chi2_doublep.migrad();
# Plot (simple) fit:
minval, minpos, quadratic = minuit_chi2.values # Note how one can "extract" the three values from the object.
print(minval)
minval_2p, minpos_2p, quadlow_2p, quadhigh_2p = minuit_chi2_doublep.values
print(minval_2p)
x_fit = np.linspace(chi2_minpos-min_fit, chi2_minpos+max_fit, 1000)
y_fit_simple = func_para_vec(x_fit, minval, minpos, quadratic)
ax_chi2.plot(x_fit, y_fit_simple, 'b-')
d = {'Chi2 value': minval,
'Fitted tau (s)': minpos,
'quadratic': quadratic}
text = nice_string_output(d, extra_spacing=3, decimals=3)
add_text_to_ax(0.02, 0.95, text, ax_chi2, fontsize=14)
fig.tight_layout()
if save_plots:
fig.savefig("FitMinimum.pdf", dpi=600)
fig
# Given the parabolic fit, we can now extract the uncertainty on tau (think about why the below formula works!):
err = 1.0 / np.sqrt(quadratic)
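# Why 1/sqrt(quadratic) is the uncertainty: close to the minimum the fitted parabola is
#   chi2(tau) ~= chi2_min + quadratic*(tau - tau_min)**2,
# and the +-1 sigma interval is defined by the points where chi2 has increased by 1,
# i.e. quadratic*err**2 = 1  =>  err = 1/sqrt(quadratic).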
# For comparison, I give one extra decimal, than I would normally do:
print(f" Chi2 fit gives: tau = {minpos:.3f} +- {err:.3f}")
# For the asymmetric case, there are naturally two errors to calculate.
#err_lower = 1.0 / np.sqrt(quadlow)
#err_upper = 1.0 / np.sqrt(quadhigh)
# Go through tau values to find minimum and +-1 sigma:
# This assumes knowing the minimum value, and Chi2s above Chi2_min+1
if (ScanChi2) :
if (((chi2[0] - chi2_minval) > 1.0) and ((chi2[Ntau_steps] - chi2_minval) > 1.0)) :
found_lower = False
found_upper = False
for itau in range (Ntau_steps+1) :
if ((not found_lower) and ((chi2[itau] - chi2_minval) < 1.0)) :
tau_lower = tau[itau]
found_lower = True
if ((found_lower) and (not found_upper) and ((chi2[itau] - chi2_minval) > 1.0)) :
tau_upper = tau[itau]
found_upper = True
print(f" Chi2 scan gives: tau = {chi2_minpos:6.4f} + {tau_upper-chi2_minpos:6.4f} - {chi2_minpos-tau_lower:6.4f}")
else :
print(f" Error: Chi2 values do not fulfill requirements for finding minimum and errors!")
###Output
Chi2 scan gives: tau = 0.8600 + 0.3200 - 0.1800
###Markdown
Discussion:One could of course have chosen a finer binning here, but that is still not very satisfactory, and in any case very slow. That is why we want to use e.g. iMinuit to perform the fit and extract all the relevant fitting parameters in a nice, fast, numerically stable way. --- Fit the data using iminuit (both chi2 and binned likelihood fits)Now we want to see what a "real" fit gives, in order to compare our result with the one provided by Minuit.
###Code
# Define the function to fit with:
def func_exp(x, N0, tau) :
return N0 * binwidth / tau * np.exp(-x/tau)
# Define the function to fit with:
def func_exp2(x, tau) :
return Ntimes * binwidth / tau * np.exp(-x/tau)
###Output
_____no_output_____
###Markdown
$\chi^2$ fit:
###Code
# Prepare figure
fig_fit, ax_fit = plt.subplots(figsize=(8, 6))
ax_fit.set_title("tau values directly fitted with iminuit")
ax_fit.set_xlabel("Lifetimes [s]")
ax_fit.set_ylabel("Frequency [ev/0.1s]")
# Plot our tau values
indexes = yExp>0 # only bins with values!
xExp = (xExp_edges[1:] + xExp_edges[:-1])/2 # Move from bins edges to bin centers
syExp = np.sqrt(yExp) # Uncertainties
ax_fit.errorbar(xExp[indexes], yExp[indexes], syExp[indexes], fmt='k_', ecolor='k', elinewidth=1, capsize=2, capthick=1)
# Chisquare-fit tau values with our function:
chi2_object_fit = Chi2Regression(func_exp, xExp[indexes], yExp[indexes], syExp[indexes])
# NOTE: The constant for normalization is NOT left free in order to have only ONE parameter!
minuit_fit_chi2 = Minuit(chi2_object_fit, N0=Ntimes, tau=tau_truth)
minuit_fit_chi2.fixed["N0"] = True
minuit_fit_chi2.errordef = 1.0
minuit_fit_chi2.migrad()
# Plot fit
x_fit = np.linspace(0, 10, 1000)
y_fit_simple = func_exp(x_fit, *minuit_fit_chi2.values)
ax_fit.plot(x_fit, y_fit_simple, 'b-', label="ChiSquare fit")
# Print the obtained fit results:
# print(minuit_fit_chi2.values["tau"], minuit_fit_chi2.errors["tau"])
tau_fit = minuit_fit_chi2.values["tau"]
etau_fit = minuit_fit_chi2.errors["tau"]
print(f" Decay time of minimum found: chi2: {tau_fit:.3f} +- {etau_fit:.3f}s")
print(f" Chi2 value at minimum: chi2 = {minuit_fit_chi2.fval:.1f}")
# Alternatively to the above, one can in iMinuit actually ask for the Chi2 curve to be plotted by one command:
minuit_fit_chi2.draw_mnprofile('tau')
###Output
_____no_output_____
###Markdown
--- Binned likelihood fit:Below is an example of a binned likelihood fit. Try to write an unbinned likelihood fit yourself!
###Code
# Binned likelihood-fit tau values with our function
# extended=True because we have our own normalization in our fit function
bllh_object_fit = BinnedLH(func_exp2, t, bins=Nbins, bound=(0, tmax), extended=True)
minuit_fit_bllh = Minuit(bllh_object_fit, tau=tau_truth)
minuit_fit_bllh.errordef = 0.5 # Value for likelihood fit
minuit_fit_bllh.migrad()
# Plot fit
x_fit = np.linspace(0, 10, 1000)
y_fit_simple = func_exp2(x_fit, *minuit_fit_bllh.values[:])
ax_fit.plot(x_fit, y_fit_simple, 'r-', label="Binned Likelihood fit")
# Define the ranges:
ax_fit.set_xlim(0, 5)
ax_fit.set_ylim(bottom=0) # We don't want to see values below this!
fig_fit.legend(loc=[0.45, 0.75])
fig_fit.tight_layout()
fig_fit
if (save_plots) :
fig_fit.savefig("ExponentialDist_Fitted.pdf", dpi=600)
###Output
_____no_output_____ |
Assignment_4(Ensemble_Learning).ipynb | ###Markdown
Import Libraries
###Code
import numpy as np
from sklearn.base import clone
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.datasets import make_circles
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import StackingClassifier
def plotDataset(X, y):
for label in np.unique(y):
plt.scatter(X[y == label, 0], X[y == label, 1], label=label)
plt.legend()
plt.show()
def plotEstimator(trX, trY, teX, teY, estimator, title=''):
estimator = clone(estimator).fit(trX, trY)
h = .02
x_min, x_max = teX[:, 0].min() - .5, teX[:, 0].max() + .5
y_min, y_max = teX[:, 1].min() - .5, teX[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
Z = estimator.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=cm, alpha=0.8)
plt.scatter(teX[:, 0], teX[:, 1], c=teY, cmap=cm_bright, edgecolors='k', alpha=0.6)
#plt.legend()
plt.title(title)
plt.show()
###Output
_____no_output_____
###Markdown
Data Sets Circle dataset
###Code
rs = 0
X, y = make_circles(300, noise=0.1, random_state=rs)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,random_state=rs)
plotDataset(X,y)
###Output
_____no_output_____
###Markdown
Classification dataset
###Code
rs = 0
X2, y2 = make_classification(300, random_state=rs)
X_train2, X_test2, y_train2, y_test2 = train_test_split(X2, y2, test_size=0.2,random_state=rs)
plotDataset(X2,y2)
###Output
_____no_output_____
###Markdown
Decision Tree **(4)** Use Circle Dataset. Apply a decision tree to the Circle Dataset, set the criterion to gini and entropy, get the accuracy on the test set, plot the decision boundaries, and explain the difference between these criteria (4.1) DT with gini index
###Code
dtEstimator_gini = DecisionTreeClassifier(criterion="gini")
dtEstimator_gini.fit(X_train, y_train)
predY = dtEstimator_gini.predict(X_test)
dtAccuracy = accuracy_score(y_test, predY)
print("test accuracy is: ",round(dtAccuracy,3))
plotEstimator(X_train, y_train, X_test, y_test, dtEstimator_gini, 'Decision Tree with gini index')
###Output
test accuracy is: 0.6
###Markdown
(4.2) DT with entropy
###Code
dtEstimator_entropy = DecisionTreeClassifier(criterion="entropy")
dtEstimator_entropy.fit(X_train, y_train)
predY = dtEstimator_entropy.predict(X_test)
dtAccuracy = accuracy_score(y_test, predY)
print("test accuracy is: ",round(dtAccuracy,3))
plotEstimator(X_train, y_train, X_test, y_test, dtEstimator_entropy, 'Decision Tree with entropy')
###Output
test accuracy is: 0.717
###Markdown
The Gini impurity is the probability of a random sample being classified incorrectly if we randomly pick a label according to the label distribution in that branch. Entropy is a measure of information (or rather, the lack of it); the information gain of a split is the difference in entropies before and after the split, and it measures how much the split reduces the uncertainty about the label (see the formulas below). **(5)** Use Classification Dataset. Use the training set to obtain the importance of features. Plot a Validation Accuracy (y-axis) vs Top K Important Features (x-axis) curve, where 4-fold cross validation should be used, and also plot a Test Accuracy vs Top K Important Features curve
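Formally, for a node with class proportions $p_1,\dots,p_K$, the two impurity measures in question (4) are
$$ \text{Gini} = 1 - \sum_{k=1}^{K} p_k^2, \qquad \text{Entropy} = -\sum_{k=1}^{K} p_k \log_2 p_k , $$
and the tree chooses the split that gives the largest decrease in impurity (the information gain, in the entropy case).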
###Code
def plot_importance_vs_accuracys(values, axis_values, title):
plt.figure(figsize=(8,5))
if len(axis_values) == 4:
axis_1 = plt.plot(values, axis_values[0], color='red', marker='*',
linestyle='-', label = '1st fold')
axis_2 = plt.plot(values, axis_values[1], color='green', marker='*',
linestyle='-', label = '2nd fold')
axis_3 = plt.plot(values, axis_values[2], color='blue', marker='*',
linestyle='-', label = '3rd fold')
axis_4 = plt.plot(values, axis_values[3], color='yellow', marker='*',
linestyle='-', label = '4th fold')
plt.title(title)
plt.xlabel('Top K Important Features')
plt.ylabel('Validation Accuracy')
plt.xticks([x for x in range(len(values))])
y_ticks = [x for x in range(60,101,5)]
plt.yticks(y_ticks)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
else:
axis1 = plt.plot(values, axis_values, color='blue', marker='*',
linestyle='-')
plt.title(title)
plt.xlabel('Top K Important Features')
plt.ylabel('Test Accuracy')
plt.xticks([x for x in range(len(values))])
y_ticks = [x for x in range(80,101,2)]
plt.yticks(y_ticks)
plt.show
###Output
_____no_output_____
###Markdown
**(5.1)** get top K important features
###Code
tree_model = DecisionTreeClassifier(random_state=0)
tree_model.fit(X_train2, y_train2)
features_import = tree_model.feature_importances_
idx_sorted = np.argsort(-features_import)[0:7]
idx_sorted
###Output
_____no_output_____
###Markdown
**(5.2)** fit DT model with top K features using 4-folds cross validation
###Code
test_accuracy = []
validation_accuracy = []
l1 = idx_sorted[0:1]
l2 = idx_sorted[0:2]
l3 = idx_sorted[0:3]
l4 = idx_sorted[0:4]
l5 = idx_sorted[0:5]
l6 = idx_sorted[0:6]
l7 = idx_sorted[0:7]
feature_list = [l1,l2,l3,l4,l5,l6,l7]
for features in feature_list:
valid_acc = cross_val_score(tree_model, X_train2[:,features], y_train2, cv=4, scoring='accuracy')
validation_accuracy.append(valid_acc * 100)
tree_model.fit(X_train2[:,features], y_train2)
y_pred = tree_model.predict(X_test2[:, features])
test_acc = accuracy_score(y_test2, y_pred)
test_accuracy.append(test_acc * 100)
valid_acc = list(map(list, zip(*validation_accuracy)))
###Output
_____no_output_____
###Markdown
**(5.3)** Plot Validation Accuracy (y-axis) vs Top K Important Feature (x-axis) curve with 4-folds
###Code
values = [x for x in range(1,8)]
plot_importance_vs_accuracys(values[:8], valid_acc, "Top K Features VS Fold accuracy")
###Output
_____no_output_____
###Markdown
**(5.4)** plot Test Accuracy vs Top K Important Feature curve
###Code
values = [x for x in range(1,8)]
plot_importance_vs_accuracys(values[:8], test_accuracy, "Top K Features VS Test Accuracy");
###Output
_____no_output_____
###Markdown
Bagging **(6)** Use Circle Dataset. Set the number of estimators as 2, 5, 15, 20 respectively, and generate the results accordingly (i.e., accuracy and decision boundary)
###Code
for n_est in [2,5,15,20]:
estimator = BaggingClassifier(n_estimators=n_est, random_state=0)
score = estimator.fit(X_train, y_train).score(X_test, y_test)
plotEstimator(X_train, y_train, X_test, y_test, estimator, f'Bagging with n_estimator = {n_est} has accuracy = {score}')
###Output
_____no_output_____
###Markdown
**(7)** Explain why bagging can reduce the variance and mitigate the overfitting problem Bagging creates many predictors by bootstrapping the data: we randomly subsample the dataset many times and train a model on each subsample. We can then aggregate our models, e.g. by averaging out their predictions, and this reduces variance and overfitting (see the formula below). Random Forest **(8)** Use Circle Dataset. Set the number of estimators as 2, 5, 15, 20 respectively, and generate the results accordingly (i.e., accuracy and decision boundary)
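Quantitatively (question 7): if the $B$ bootstrapped models each have variance $\sigma^2$ and pairwise correlation $\rho$, the variance of their averaged prediction is
$$ \rho\,\sigma^2 + \frac{1-\rho}{B}\,\sigma^2 , $$
so averaging over many bootstrap models shrinks the second term towards zero; random forests additionally lower $\rho$ by splitting on random feature subsets.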
###Code
for n_est in [2,5,15,20]:
estimator = RandomForestClassifier(n_estimators=n_est, random_state=0)
score = estimator.fit(X_train, y_train).score(X_test, y_test)
plotEstimator(X_train, y_train, X_test, y_test, estimator, f'RF with n_estimator = {n_est} has accuracy = {score}')
###Output
_____no_output_____
###Markdown
**(9)** Compare with the bagging results and explain the difference between Bagging and Random Forest The fundamental difference is that in random forests only a random subset of the features is considered at each node, and the best split feature is chosen from that subset, unlike in bagging where all features are considered when splitting a node. Boosting **(10)** Use Circle Dataset. There are 2 important hyperparameters in AdaBoost, i.e., the number of estimators (ne) and the learning rate (lr). Please plot 12 subfigures, one for each combination of the 4 n_estimators values and 3 learning rates set up in the code below. Each figure should plot the decision boundary, and each title should follow the format {n_estimators}, {learning_rate}, {accuracy}
###Code
n_estimator = [10,50,100,200]
l_rate = [0.1,1,2]
for l in l_rate:
for n_est in n_estimator:
estimator = AdaBoostClassifier(n_estimators= n_est, learning_rate= l)
score = estimator.fit(X_train, y_train).score(X_test, y_test)
plotEstimator(X_train, y_train, X_test, y_test, estimator, f' {n_est} , {l} , {score}')
###Output
_____no_output_____
###Markdown
Stacking **(11)** We have tuned the Decision Tree, Bagging, Random Forest, and AdaBoost in the previous sections. Use these fine-tuned models as base estimators and use Naive Bayes, Logistic Regression, and Decision Tree as aggregators to generate the results accordingly (i.e., accuracy and decision boundary) Base Estimators
###Code
base_estimaters = list()
base_estimaters.append(('DT',DecisionTreeClassifier(criterion="entropy", random_state=0)))
base_estimaters.append(('Bagging' ,BaggingClassifier(n_estimators=5, random_state=0)))
base_estimaters.append(('RF', RandomForestClassifier(n_estimators=5, random_state=0)))
base_estimaters.append(('Adaboost', AdaBoostClassifier(n_estimators=50, learning_rate= 1, random_state=0)))
###Output
_____no_output_____
###Markdown
(11.1) Naive Bayes as Aggregator
###Code
aggregator1 =GaussianNB()
model1 = StackingClassifier(estimators=base_estimaters, final_estimator=aggregator1, cv=5)
score = model1.fit(X_train, y_train).score(X_test, y_test)
plotEstimator(X_train, y_train, X_test, y_test, model1, f'Accuracy of Gaussian as aggregator = {score}')
###Output
_____no_output_____
###Markdown
(11.2) Logistic Regression as Aggregator
###Code
aggregator2 =LogisticRegression()
model2 = StackingClassifier(estimators=base_estimaters, final_estimator=aggregator2, cv=5)
score = model2.fit(X_train, y_train).score(X_test, y_test)
plotEstimator(X_train, y_train, X_test, y_test, model2, f'Accuracy of Logistic Regression as aggregator = {score}')
###Output
_____no_output_____
###Markdown
(11.3) Decision Tree as Aggregator
###Code
aggregator3 =DecisionTreeClassifier()
model3 = StackingClassifier(estimators=base_estimaters, final_estimator=aggregator3, cv=5)
score = model3.fit(X_train, y_train).score(X_test, y_test)
plotEstimator(X_train, y_train, X_test, y_test, model3, f'Accuracy of DT as aggregator = {score}')
###Output
_____no_output_____ |
Introduction to Portfolio Construction and Analysis with Python/W3/.ipynb_checkpoints/Monte Carlo Simulation-checkpoint.ipynb | ###Markdown
Monte Carlo Simulation and Random Walk Generation $$ \frac{ S_{1+dt} - S_t}{S_t} = \mu dt + \sigma \sqrt {dt} \xi_t $$
###Code
import numpy as np
import pandas as pd
def gbm(n_years =10, n_scenarios = 1000, mu=0.07,sigma = 0.15, steps_per_year = 12, s_0 = 100.0):
"""
Evolution of a Stock Price using Geometric Browian Motion Model (Monte Carlo Simulation)
"""
dt = 1/steps_per_year
n_steps = int(n_years * steps_per_year)
rets_plus_1 = np.random.normal(loc= (1+mu*dt),scale = (sigma*np.sqrt(dt)),size = (n_steps, n_scenarios), )
rets_plus_1[0] = 1
prices = s_0*pd.DataFrame(rets_plus_1).cumprod()
return prices
import ashmodule as ash
ax = gbm(n_scenarios = 20).plot(legend = False,figsize = (12,6));
ax.set_xlim(left = 0);
gbm(n_scenarios = 10).head()
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Using IPyWidget to Interact Plotting the Monte Carlo Simulation
###Code
import ipywidgets as widgets
from IPython.display import display
import matplotlib.pyplot as plt
def show_gbm(n_scenarios=1000, mu=0.07, sigma=0.15, s_0=100.0):
"""
Draw the results of a stock price evolution under a Geometric Brownian Motion model
"""
s_0=s_0
prices = gbm(n_scenarios=n_scenarios, mu=mu, sigma=sigma, s_0=s_0)
ax = prices.plot(legend=False, color="indianred", alpha = 0.5, linewidth=2, figsize=(12,5))
ax.axhline(y=s_0, ls=":", color="black")
# draw a dot at the origin
ax.plot(0,s_0, marker='o',color='darkred', alpha=0.2)
gbm_controls = widgets.interactive(ash.show_gbm,
n_scenarios = widgets.IntSlider(min=1,max=1000,step=5),
mu =(-0.3,0.3,0.05),
sigma =(0,0.5,0.01),
s_0 =(1,500,10)
)
display(gbm_controls)
###Output
_____no_output_____
###Markdown
Using IPyWidgets to interact with Monte Carlo Simulations and CPPI
###Code
def show_cppi(n_scenarios=50, mu=0.07, sigma=0.15, m=3, floor=0.0, riskfree_rate=0.03, y_max=100,s_0=100, steps_per_year = 12):
"""
Plot the results of a Monte Carlo Simulation of CPPI
"""
start = s_0
sim_rets = ash.gbm(n_scenarios=n_scenarios, mu=mu, sigma=sigma, steps_per_year=steps_per_year)
risky_r = pd.DataFrame(sim_rets)
# run the "back"-test
btr = ash.run_cppi(risky_r=pd.DataFrame(risky_r),riskfree_rate=riskfree_rate,m=m, start=start, floor=floor)
wealth = btr["risky_r"]
# calculate terminal wealth stats
y_max=wealth.values.max()*y_max/100
ax = wealth.plot(legend = False, alpha = 0.3, color = "indianred", figsize = (12,6))
ax.axhline(y=start, ls=":", color= "black")
ax.axhline(y=start*floor, ls="--",color = "red")
ax.set_ylim(top=y_max)
cppi_controls = widgets.interactive(show_cppi,
n_scenarios=widgets.IntSlider(min=1, max=1000, step=5, value=50),
mu=(0., +.2, .01),
sigma=(0, .30, .05),
floor=(0, 2, .1),
m=(1, 5, .5),
riskfree_rate=(0, .05, .01),
y_max=widgets.IntSlider(min=0, max=100, step=1, value=100,
description="Zoom Y Axis")
)
display(cppi_controls)
r_asset = ash.gbm(n_scenarios=50)
r_asset
ash.run_cppi((r_asset))["risky_r"][0].plot(legend=False,figsize =(12,6))
ash.run_cppi(r_asset,start = 100)["Wealth"].head()
r_asset.shape
r_asset.index = pd.date_range("2000-01",periods=r_asset.shape[0],freq="MS").to_period("M")
r_asset.head()
ash.run_cppi(r_asset,start = 100)["risky_r"].plot(legend = False);
ash.run_cppi(r_asset,start = 100)["risky_r"].plot(legend = False,figsize = (12,6),color= "red", alpha = 0.3);
###Output
_____no_output_____ |
finance/Efficient Frontier.ipynb | ###Markdown
The Efficient Frontier of Optimal Portfolio Transactions Introduction[Almgren and Chriss](https://cims.nyu.edu/~almgren/papers/optliq.pdf) showed that for each value of risk aversion there is a unique optimal execution strategy. The optimal strategy is obtained by minimizing the **Utility Function** $U(x)$:\begin{equation}U(x) = E(x) + \lambda V(x)\end{equation}where $E(x)$ is the **Expected Shortfall**, $V(x)$ is the **Variance of the Shortfall**, and $\lambda$ corresponds to the trader’s risk aversion. The expected shortfall and variance of the optimal trading strategy are given by:In this notebook, we will learn how to visualize and interpret these equations. The Expected ShortfallAs we saw in the previous notebook, even if we use the same trading list, we are not guaranteed to always get the same implementation shortfall due to the random fluctuations in the stock price. This is why we had to reframe the problem of finding the optimal strategy in terms of the average implementation shortfall and the variance of the implementation shortfall. We call the average implementation shortfall, the expected shortfall $E(x)$, and the variance of the implementation shortfall $V(x)$. So, whenever we talk about the expected shortfall we are really talking about the average implementation shortfall. Therefore, we can think of the expected shortfall as follows. Given a single trading list, the expected shortfall will be the value of the average implementation shortfall if we were to implement this trade list in the stock market many times. To see this, in the code below we implement the same trade list on 50,000 trading simulations. We call each trading simulation an episode. Each episode will consist of different random fluctuations in stock price. For each episode we will compute the corresponding implemented shortfall. After all the 50,000 trading simulations have been carried out we calculate the average implementation shortfall and the variance of the implemented shortfalls. We can then compare these values with the values given by the equations for $E(x)$ and $V(x)$ from the Almgren and Chriss model.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import utils
# Set the default figure size
plt.rcParams['figure.figsize'] = [17.0, 7.0]
# Set the liquidation time
l_time = 60
# Set the number of trades
n_trades = 60
# Set trader's risk aversion
t_risk = 1e-6
# Set the number of episodes to run the simulation
episodes = 10
utils.get_av_std(lq_time = l_time, nm_trades = n_trades, tr_risk = t_risk, trs = episodes)
# Get the AC Optimal strategy for the given parameters
ac_strategy = utils.get_optimal_vals(lq_time = l_time, nm_trades = n_trades, tr_risk = t_risk)
ac_strategy
###Output
Average Implementation Shortfall: $579,001.15
Standard Deviation of the Implementation Shortfall: $524,293.07
###Markdown
Extreme Trading StrategiesBecause some investors may be willing to take more risk than others, when looking for the optimal strategy we have to consider a wide range of risk values, ranging from those traders that want to take zero risk to those who want to take as much risk as possible. Let's take a look at these two extreme cases. We will define the **Minimum Variance** strategy as that one followed by a trader that wants to take zero risk and the **Minimum Impact** strategy at that one followed by a trader that wants to take as much risk as possible. Let's take a look at the values of $E(x)$ and $V(x)$ for these extreme trading strategies. The `utils.get_min_param()` uses the above equations for $E(x)$ and $V(x)$, along with the parameters from the trading environment to calculate the expected shortfall and standard deviation (the square root of the variance) for these strategies. We'll start by looking at the Minimum Impact strategy.
###Code
import utils
# Get the minimum impact and minimum variance strategies
minimum_impact, minimum_variance = utils.get_min_param()
###Output
_____no_output_____
###Markdown
Minimum Impact StrategyThis trading strategy will be taken by trader that has no regard for risk. In the Almgren and Chriss model this will correspond to having the trader's risk aversion set to $\lambda = 0$. In this case the trader will sell the shares at a constant rate over a long period of time. By doing so, he will minimize market impact, but will be at risk of losing a lot of money due to the large variance. Hence, this strategy will yield the lowest possible expected shortfall and the highest possible variance, for a given set of parameters. We can see that for the given parameters, this strategy yields an expected shortfall of \$197,000 dollars but has a very big standard deviation of over 3 million dollars.
###Code
minimum_impact
###Output
_____no_output_____
###Markdown
Minimum Variance StrategyThis trading strategy will be taken by trader that wants to take zero risk, regardless of transaction costs. In the Almgren and Chriss model this will correspond to having a variance of $V(x) = 0$. In this case, the trader would prefer to sell the all his shares immediately, causing a known price impact, rather than risk trading in small increments at successively adverse prices. This strategy will yield the smallest possible variance, $V(x) = 0$, and the highest possible expected shortfall, for a given set of parameters. We can see that for the given parameters, this strategy yields an expected shortfall of over 2.5 million dollars but has a standard deviation equal of zero.
###Code
minimum_variance
###Output
_____no_output_____
###Markdown
The Efficient FrontierThe goal of Almgren and Chriss was to find the optimal strategies that lie between these two extremes. In their paper, they showed how to compute the trade list that minimizes the expected shortfall for a wide range of risk values. In their model, Almgren and Chriss used the parameter $\lambda$ to measure a trader's risk-aversion. The value of $\lambda$ tells us how much a trader is willing to penalize the variance of the shortfall, $V(X)$, relative to expected shortfall, $E(X)$. They showed that for each value of $\lambda$ there is a uniquely determined optimal execution strategy. We define the **Efficient Frontier** to be the set of all these optimal trading strategies. That is, the efficient frontier is the set that contains the optimal trading strategy for each value of $\lambda$.The efficient frontier is often visualized by plotting $(x,y)$ pairs for a wide range of $\lambda$ values, where the $x$-coordinate is given by the equation of the expected shortfall, $E(X)$, and the $y$-coordinate is given by the equation of the variance of the shortfall, $V(X)$. Therefore, for a given set of parameters, the curve defined by the efficient frontier represents the set of optimal trading strategies that give the lowest expected shortfall for a defined level of risk.In the code below, we plot the efficient frontier for $\lambda$ values in the range $(10^{-7}, 10^{-4})$, using the default parameters in our trading environment. Each point of the frontier represents a distinct strategy for optimally liquidating the same number of stocks. A risk-averse trader, who wishes to sell quickly to reduce exposure to stock price volatility, despite the trading costs incurred in doing so, will likely choose a value of $\lambda = 10^{-4}$. On the other hand, a trader who likes risk, and who wishes to postpone selling, will likely choose a value of $\lambda = 10^{-7}$. In the code, you can choose a particular value of $\lambda$ to see the expected shortfall and level of variance corresponding to that particular value of trader's risk aversion.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import utils
# Set the default figure size
plt.rcParams['figure.figsize'] = [17.0, 7.0]
# Plot the efficient frontier for the default values. The plot points out the expected shortfall and variance of the
# optimal strategy for the given the trader's risk aversion. Valid range for the trader's risk aversion (1e-7, 1e-4).
utils.plot_efficient_frontier(tr_risk = 1e-6)
###Output
_____no_output_____
|
notebooks/20-creating-datasets.ipynb | ###Markdown
2.0: Reproducible Data Sources "In God we trust. All others must bring data." – W. Edwards Deming
###Code
%load_ext autoreload
%autoreload 2
import logging
from src.logging import logger
logger.setLevel(logging.INFO)
###Output
_____no_output_____
###Markdown
Introducing the `DataSource`The `DataSource` object handles downloading, unpacking, and processing raw data files, and serves as a container for some basic metadata about the raw data, including **documentation** and **license** information.Raw data files are downloaded to `paths.raw_data_path`. Cache files and unpacked raw files are saved to `paths.interim_data_path`. Example: LVQ-Pak, a Finnish phonetic datasetThe Learning Vector Quantization (lvq-pak) project includes a simple Finnish phonetic dataset consisting of 20-dimensional Mel Frequency Cepstrum Coefficients (MFCCs) labelled with target phoneme information. Our goal is to explore this dataset, process it into a useful form, and make it a part of a reproducible data science workflow. The project can be found at: http://www.cis.hut.fi/research/lvq_pak/ For this example, we are going to create a `DataSource` for the LVQ-Pak dataset. The process will consist of: 1. Downloading and unpacking the raw data files. 2. Generating (and recording) hash values for these files.3. Adding LICENSE and DESCR (description) metadata to this DataSource4. Adding the complete `DataSource` to the Catalog Downloading Raw Data Source Files
###Code
from src.data import DataSource
from src.utils import list_dir
from src import paths
# Create a data source object
datasource_name = 'lvq-pak'
dsrc = DataSource(datasource_name)
# Add URL(s) for raw data files
dsrc.add_url("http://www.cis.hut.fi/research/lvq_pak/lvq_pak-3.1.tar")
# Fetch the files
logger.setLevel(logging.DEBUG)
dsrc.fetch()
###Output
_____no_output_____
###Markdown
By default, data files are downloaded to the `paths.raw_data_path` directory:
###Code
!ls -la $paths.raw_data_path
###Output
_____no_output_____
###Markdown
Since we did not specify a hash, or target filename, these are inferred from the downloaded file:
###Code
dsrc.file_list
###Output
_____no_output_____
###Markdown
Remove a file from the file_list
###Code
# Note that if we add a url again, we end up with more of the same file in the file list
dsrc.add_url("http://www.cis.hut.fi/research/lvq_pak/lvq_pak-3.1.tar")
dsrc.file_list
dsrc.fetch()
###Output
_____no_output_____
###Markdown
Fetch is smart enough to not redownload the same file in this case. Still, this is messy and cumbersome. We can remove entries by removing them from the `file_list`.
###Code
dsrc.file_list.pop(1)
dsrc.file_list
dsrc.fetch(force=True)
###Output
_____no_output_____
###Markdown
Sometimes we make mistakes when entering information
###Code
dsrc.add_url("http://www.cis.hut.fi/research/lvq_pak/lvq_pak-3.1.tar", name='cat', file_name='dog')
dsrc.file_list
dsrc.fetch()
!ls -la $paths.raw_data_path
###Output
_____no_output_____
###Markdown
We now have a copy of `lvq_pak-3.1.tar` called `dog`. Every time we fetch, we will fetch twice unless we get rid of the entry for `dog`.First, we will want to remove `dog` from our raw data.Let's take the "Nuke it from orbit. It's the only way to be sure" approach and clean our entire raw data directory.
###Code
!cd .. && make clean_raw
!ls -la $paths.raw_data_path
###Output
_____no_output_____
###Markdown
The other option would have been to manually remove the `dog` file and then force a refetch. Exercise: Remove the entry for dog and refetch
###Code
# You should now only see the lvq_pak-3.1.tar file
!ls -la $paths.raw_data_path
###Output
_____no_output_____
###Markdown
Cached Downloads The DataSource object keeps track of whether the fetch has been performed successfully. Subsequent downloads will be skipped by default:
###Code
dsrc.fetch()
###Output
_____no_output_____
###Markdown
We can override this, which will check if the downloaded file exists, redownloading if necessary
###Code
dsrc.fetch(force=True)
###Output
_____no_output_____
###Markdown
In the previous case, the raw data file existed on the filesystem, and had the correct hash. If the local file has a checksum that doesn't match the saved hash, it will be re-downloaded automatically. Let's corrupt the file and see what happens.
###Code
!echo "XXX" >> $paths.raw_data_path/lvq_pak-3.1.tar
dsrc.fetch(force=True)
###Output
_____no_output_____
###Markdown
Exercise: Creating an F-MNIST `DataSource` For this exercise, you are going to build a `DataSource` out of the Fashion-MNIST dataset.[Fashion-MNIST][FMNIST] is available from GitHub. Looking at their [README], we see that the raw data is distributed as a set of 4 files with the following checksums:[FMNIST]: https://github.com/zalandoresearch/fashion-mnist[README]: https://github.com/zalandoresearch/fashion-mnist/blob/master/README.md

| Name | Content | Examples | Size | Link | MD5 Checksum |
| --- | --- | --- | --- | --- | --- |
| `train-images-idx3-ubyte.gz` | training set images | 60,000 | 26 MBytes | [Download](http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz) | `8d4fb7e6c68d591d4c3dfef9ec88bf0d` |
| `train-labels-idx1-ubyte.gz` | training set labels | 60,000 | 29 KBytes | [Download](http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz) | `25c81989df183df01b3e8a0aad5dffbe` |
| `t10k-images-idx3-ubyte.gz` | test set images | 10,000 | 4.3 MBytes | [Download](http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz) | `bef4ecab320f06d8554ea6380940ec79` |
| `t10k-labels-idx1-ubyte.gz` | test set labels | 10,000 | 5.1 KBytes | [Download](http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz) | `bb300cfdad3c16e7a12a480ee83cd310` |

By the end of this running example, you will build a `DataSource` that downloads these raw files and verifies that the hash values are as expected. You should make sure to include **Description** and **License** metadata in this `DataSource`. When you are finished, save the `DataSource` to the Catalog. Exercise: Download Raw Data Source Files for F-MNIST
###Code
# Create an fmnist data source object
# Add URL(s) for raw data files
# Note that you will be adding four files to the DataSource object
# and that the hash values have already been provided above!
# Fetch the files
# Check for your new files
!ls -la $paths.raw_data_path
###Output
_____no_output_____
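###Markdown
One possible sketch for the exercise above, assuming the same `DataSource`/`add_url`/`fetch` interface used for lvq-pak. The keyword for attaching the MD5 checksums is not shown in this notebook, so it is only indicated in a comment rather than guessed at:
###Code
# Sketch only: builds an F-MNIST DataSource from the four files in the README table.
fmnist = DataSource('fmnist')
fmnist_base = "http://fashion-mnist.s3-website.eu-central-1.amazonaws.com"
for fname in ["train-images-idx3-ubyte.gz", "train-labels-idx1-ubyte.gz",
              "t10k-images-idx3-ubyte.gz", "t10k-labels-idx1-ubyte.gz"]:
    # The MD5 checksums from the README would also be recorded here; the exact
    # add_url() keyword for supplying them is not shown above, so it is omitted.
    fmnist.add_url(f"{fmnist_base}/{fname}", file_name=fname)
fmnist.fetch()
###Output
_____no_output_____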
###Markdown
Unpacking Raw Data Files
###Code
unpack_dir = dsrc.unpack()
###Output
_____no_output_____
###Markdown
By default, files are decompressed/unpacked to the `paths.interim_data_path`/`datasource_name` directory:
###Code
!ls -la $paths.interim_data_path
# We unpack everything into interim_data_path/datasource_name, which is returned by `unpack()`
!ls -la $unpack_dir
!ls -la $unpack_dir/lvq_pak-3.1
###Output
_____no_output_____
###Markdown
Exercise: Unpack raw data files for F-MNIST
###Code
# Check for your files in the unpacked dirs
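# One possible approach (assuming an F-MNIST DataSource named `fmnist` was created
# in the earlier exercise; the directory name follows the datasource_name convention
# shown above for lvq-pak):
# fmnist.unpack()
# !ls -la $paths.interim_data_path/fmnist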
###Output
_____no_output_____
###Markdown
Adding Metadata to Raw DataWait, what have we actually downloaded, and are we actually allowed to **use** this data? We keep track of two key pieces of metadata along with a raw dataset:* Description (`DESCR`) Text: Human-readable text describing the dataset, its source, and what it represents* License (`LICENSE`) Text: Terms of use for this dataset, often in the form of a license agreement Often, a dataset comes complete with its own README and LICENSE files. If these are available via URL, we can add these like we add any other data file, tagging them as metadata using the `name` field:
###Code
dsrc.add_url("http://www.cis.hut.fi/research/lvq_pak/README",
file_name='lvq-pak.readme', name='DESCR')
dsrc.fetch()
dsrc.unpack()
# We now fetch 2 files. Note the metadata has been tagged accordingly in the `name` field
dsrc.file_list
###Output
_____no_output_____
###Markdown
We need to dig a little deeper to find the license. We find it at the beginning of the README file contained within that distribution:
###Code
!head -35 $paths.interim_data_path/lvq-pak/lvq_pak-3.1/README
###Output
_____no_output_____
###Markdown
Rather than trying to be clever, let's just add the license metadata from a python string that we cut and paste from the above.
###Code
license_txt = '''
************************************************************************
* *
* LVQ_PAK *
* *
* The *
* *
* Learning Vector Quantization *
* *
* Program Package *
* *
* Version 3.1 (April 7, 1995) *
* *
* Prepared by the *
* LVQ Programming Team of the *
* Helsinki University of Technology *
* Laboratory of Computer and Information Science *
* Rakentajanaukio 2 C, SF-02150 Espoo *
* FINLAND *
* *
* Copyright (c) 1991-1995 *
* *
************************************************************************
* *
* NOTE: This program package is copyrighted in the sense that it *
* may be used for scientific purposes. The package as a whole, or *
* parts thereof, cannot be included or used in any commercial *
* application without written permission granted by its producents. *
* No programs contained in this package may be copied for commercial *
* distribution. *
* *
* All comments concerning this program package may be sent to the *
* e-mail address '[email protected]'. *
* *
************************************************************************
'''
dsrc.add_metadata(contents=license_txt, kind='LICENSE')
###Output
_____no_output_____
###Markdown
Under the hood, this will create a file, storing the creation instructions in the same `file_list` we use to store the URLs we wish to download:
###Code
dsrc.file_list
###Output
_____no_output_____
###Markdown
Now when we fetch, the license file is created from this information:
###Code
logger.setLevel(logging.DEBUG)
dsrc.fetch(force=True)
dsrc.unpack()
!ls -la $paths.raw_data_path
###Output
_____no_output_____
###Markdown
Exercise: Add metadata to F-MNIST Adding Raw Data to the Catalog
###Code
from src import workflow
workflow.available_datasources()
workflow.add_datasource(dsrc)
workflow.available_datasources()
###Output
_____no_output_____
###Markdown
We will make use of this raw dataset catalog later in this tutorial. We can now load our `DataSource` by name:
###Code
ds = DataSource.from_name('lvq-pak')
ds.file_list
###Output
_____no_output_____
###Markdown
Exercise: Add F-MNIST to the Raw Dataset Catalog
###Code
# Your fmnist dataset should now show up here:
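# One possible approach (assuming the `fmnist` DataSource from the earlier exercise):
# it must first be added to the catalog, mirroring the lvq-pak call above.
# workflow.add_datasource(fmnist)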
workflow.available_datasources()
###Output
_____no_output_____
###Markdown
Nuke it from OrbitNow we can blow away all the data that we've downloaded and set up so far, and recreate it from the workflow datasource. Or, use some of our `make` commands!
###Code
!cd .. && make clean_raw
!ls -la $paths.raw_data_path
!cd .. && make fetch_sources
!ls -la $paths.raw_data_path
# What about fetch and unpack?
!cd .. && make clean_raw && make clean_interim
!ls -la $paths.raw_data_path
!cd .. && make unpack_sources
!ls -la $paths.raw_data_path
!ls -la $paths.interim_data_path
###Output
_____no_output_____ |
examples/compare-czt-fft.ipynb | ###Markdown
Example: Compare CZT to FFT
###Code
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
# CZT package
import czt
# https://github.com/garrettj403/SciencePlots
plt.style.use(['science', 'notebook'])
###Output
_____no_output_____
###Markdown
Generate Time-Domain Signal for Example
###Code
# Time data
t = np.arange(0, 20, 0.1) * 1e-3
dt = t[1] - t[0]
Fs = 1 / dt
N = len(t)
print("Sampling period: {:5.2f} ms".format(dt * 1e3))
print("Sampling frequency: {:5.2f} kHz".format(Fs / 1e3))
print("Nyquist frequency: {:5.2f} kHz".format(Fs / 2 / 1e3))
print("Number of points: {:5d}".format(N))
# Signal data
def model1(t):
"""Exponentially decaying sine wave with higher-order distortion."""
output = (1.0 * np.sin(2 * np.pi * 1e3 * t) +
0.3 * np.sin(2 * np.pi * 2.5e3 * t) +
0.1 * np.sin(2 * np.pi * 3.5e3 * t)) * np.exp(-1e3 * t)
return output
def model2(t):
"""Exponentially decaying sine wave without higher-order distortion."""
output = (1.0 * np.sin(2 * np.pi * 1e3 * t)) * np.exp(-1e3 * t)
return output
sig = model1(t)
# Plot time-domain data
plt.figure()
t_tmp = np.linspace(0, 6, 601) / 1e3
plt.plot(t_tmp*1e3, model1(t_tmp), 'k', lw=0.5, label='Data')
plt.plot(t*1e3, sig, 'ro--', label='Samples')
plt.xlabel("Time (ms)")
plt.ylabel("Signal")
plt.xlim([0, 6])
plt.legend()
plt.title("Time-domain signal");
###Output
_____no_output_____
###Markdown
Frequency-domain
###Code
sig_fft = np.fft.fftshift(np.fft.fft(sig))
f_fft = np.fft.fftshift(np.fft.fftfreq(N, d=dt))
freq, sig_f = czt.time2freq(t, sig)
plt.figure()
plt.plot(f_fft / 1e3, np.abs(sig_fft), 'k', label='FFT')
plt.plot(freq / 1e3, np.abs(sig_f), 'ro--', label='CZT')
plt.xlabel("Frequency (kHz)")
plt.ylabel("Signal magnitude")
plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3])
plt.legend()
plt.title("Frequency-domain")
plt.savefig("results/freq-domain.png", dpi=600)
plt.figure()
plt.plot(f_fft / 1e3, np.angle(sig_fft), 'k', label='FFT')
plt.plot(freq / 1e3, np.angle(sig_f), 'ro--', label='CZT')
plt.xlabel("Frequency (kHz)")
plt.ylabel("Signal phase")
plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3])
plt.legend()
plt.title("Frequency-domain");
###Output
_____no_output_____
###Markdown
Example: Compare CZT to FFT
###Code
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
# CZT package
import czt
# https://github.com/garrettj403/SciencePlots
plt.style.use(['science', 'notebook'])
###Output
_____no_output_____
###Markdown
Generate Time-Domain Signal
###Code
# Time data
t = np.arange(0, 20, 0.1) * 1e-3
dt = t[1] - t[0]
Fs = 1 / dt
N = len(t)
print("Sampling period: {:5.2f} ms".format(dt * 1e3))
print("Sampling frequency: {:5.2f} kHz".format(Fs / 1e3))
print("Nyquist frequency: {:5.2f} kHz".format(Fs / 2 / 1e3))
print("Number of points: {:5d}".format(N))
# Signal data
def model1(t):
"""Exponentially decaying sine wave with higher-order distortion."""
output = (1.0 * np.sin(2 * np.pi * 1e3 * t) +
0.3 * np.sin(2 * np.pi * 2.5e3 * t) +
0.1 * np.sin(2 * np.pi * 3.5e3 * t)) * np.exp(-1e3 * t)
return output
def model2(t):
"""Exponentially decaying sine wave without higher-order distortion."""
output = (1.0 * np.sin(2 * np.pi * 1e3 * t)) * np.exp(-1e3 * t)
return output
sig = model1(t)
# Plot time-domain data
plt.figure()
t_tmp = np.linspace(0, 6, 601) / 1e3
plt.plot(t_tmp*1e3, model1(t_tmp), 'k', lw=0.5, label='Data')
plt.plot(t*1e3, sig, 'ro--', label='Samples')
plt.xlabel("Time (ms)")
plt.ylabel("Signal")
plt.xlim([0, 6])
plt.legend()
plt.title("Time-domain signal");
###Output
_____no_output_____
###Markdown
Frequency-domain
###Code
sig_fft = np.fft.fftshift(np.fft.fft(sig))
f_fft = np.fft.fftshift(np.fft.fftfreq(N, d=dt))
freq, sig_f = czt.time2freq(t, sig)
# Plot results
fig1 = plt.figure(1)
frame1a = fig1.add_axes((.1,.3,.8,.6))
plt.plot(f_fft / 1e3, np.abs(sig_fft), 'k', label='FFT')
plt.plot(freq / 1e3, np.abs(sig_f), 'ro--', label='CZT')
plt.ylabel("Signal magnitude")
plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3])
plt.legend()
plt.title("Frequency-domain")
frame1b = fig1.add_axes((.1,.1,.8,.2))
plt.plot(f_fft / 1e3, (np.abs(sig_fft) - np.abs(sig_f)) * 1e13, 'r-', label="Data")
plt.xlabel("Frequency (kHz)")
plt.ylabel("Residual\n" + r"($\times10^{-13}$)")
plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3])
plt.savefig("results/freq-domain.png", dpi=600)
# Plot results
fig2 = plt.figure(2)
frame2a = fig2.add_axes((.1,.3,.8,.6))
plt.plot(f_fft / 1e3, np.angle(sig_fft), 'k', label='FFT')
plt.plot(freq / 1e3, np.angle(sig_f), 'ro--', label='CZT')
plt.ylabel("Signal phase")
plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3])
plt.legend()
plt.title("Frequency-domain")
frame2b = fig2.add_axes((.1,.1,.8,.2))
plt.plot(f_fft / 1e3, (np.angle(sig_fft) - np.angle(sig_f)) * 1e13, 'r-', label="Data")
plt.xlabel("Frequency (kHz)")
plt.ylabel("Residual\n" + r"($\times10^{-13}$)")
plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3]);
###Output
_____no_output_____ |
GRBAnalysis/1.LATGRBAnalysis/1.LATGRBAnalysis.ipynb | ###Markdown
LAT Gamma-Ray Burst AnalysisThis procedure provides a step-by-step example of extracting and modeling a LAT Gamma-Ray Burst observation and modeling the prompt and temporally extended emissions using the X-Ray Spectral Fitting Package (**Xspec**) and **gtlike**, respectively. It should be noted that the LAT Low Energy (LLE) data products can also be used for LAT-detected GRBs (see [GRB Analysis Using GTBurst](https://fermidev.gsfc.nasa.gov/ssc/data/analysis/scitools/gtburst.html)). Prerequisites* [gtbin](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtbin.txt)* [gtdiffrsp](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtdiffrsp.txt)* [gtexpmap](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtexpmap.txt)* [gtfindsrc](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtfindsrc.txt)* [gtltcube](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtltcube.txt)* [gtmktime](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtmktime.txt)* [gtrspgen](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtrspgen.txt)* [gtselect](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtselect.txt)* [gtvcut](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtvcut.txt)* XSPEC, used as a spectral analysis tool in Step 3 of this procedure (See [Xanadu Data Analysis for X-Ray Astronomy](http://heasarc.gsfc.nasa.gov/docs/xanadu/).)* The FITS viewer [*fv*](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/heasarc.gsfc.nasa.gov/ftools/fv.html)* The astronomical imaging and data visualization application [*ds9*](http://hea-www.harvard.edu/RD/ds9/) AssumptionsIt is assumed that:* The referenced files reside in your working directory.* You know the time and location of the burst you wish to analyze. Note: For this thread, we will analyze GRB080916C, one of the brightest LAT GRBs on record. The relevant burst properties are: * T0 = 00:12:45.614 UT, 16 September 2008, corresponding to 243216766.614 seconds (MET) * Trigger 243216766 * RA = 121.8 degrees * Dec = -61.3 degrees * You have extracted the files used in this tutorial. You can download them in the code cell below, or you can extract them yourself in the [LAT Data Server](http://fermi.gsfc.nasa.gov/cgi-bin/ssc/LAT/LATDataQuery.cgi) with the following selections:```GRB080916CSearch Center (RA,Dec) = (121.8,-61.3)Radius = 40 degreesStart Time (MET) = 243216266.6 seconds (2011-03-28T00:00:00)Stop Time (MET) = 243218766.6 seconds (2011-04-04T00:00:00)Minimum Energy = 100 MeVMaximum Energy = 300000 MeV``` In this case, the GRB in question is of a sufficiently short duration, e.g. ~10's of seconds, so that the accumulation of LAT background counts is negligible. In order to study delayed emission, e.g. 10's of minutes to ~hour timescales, a likelihood analysis will be required.
###Code
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/latGrbAnalysis/LAT_GRB_analysis.tgz
!mkdir data
!mv LAT_GRB_analysis.tgz ./data
!tar -xzvf ./data/LAT_GRB_analysis.tgz -C ./data
###Output
_____no_output_____
###Markdown
Steps:1. [Localize the GRB.](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/lat_grb_analysis.htmlTS)2. [Generating the analysis files.](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/lat_grb_analysis.htmlFILESGEN)3. [Binned analysis with XSPEC (prompt emission).](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/lat_grb_analysis.htmlXSPEC)4. [Unbinned analysis using gtlike (extended emission).](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/lat_grb_analysis.htmlGTLIKE)**NOTE**: During the analysis of the prompt emission (Steps 1 to 3) we will make use of the `P8R3_TRANSIENT020_V2` [response function](http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_LAT_IRFs/IRF_overview.html), while in the analysis of the extended emission (Step 4) the `P8R3_SOURCE_V2` response function will be used. 1. Localize the GRB**a) Select LAT data during prompt burst phase**This can either be done using a time interval ascertained from data from other instruments (e.g., using the GBM trigger time and T90 values reported in the [Fermi/GBM circular](http://gcn.gsfc.nasa.gov/gcn3/8245.gcn3)), or it can be estimated directly from the LAT light curve. Open the light curve `lc_zmax100.fits` with [*fv*](http://heasarc.nasa.gov/ftools/fv/):
###Code
!fv ./data/LAT_GRB_analysis/lc_zmax100.fits
###Output
_____no_output_____
###Markdown
You should get something that looks like this: Here, we have plotted TIME-243216766 on the x-axis (with TIMEDEL as error) and COUNTS on the y-axis (with ERROR as error). Hovering the cursor over the plot will yield its x-y coordinates; in this case, a plausible estimate of the LAT emission interval is (T0, T0+40s).We run **gtselect** to extract the data for this time interval.Remember to set `evclass=16` on the command line to ensure that we retain the transient class events:
###Code
%%bash
gtselect evclass=16
./data/LAT_GRB_analysis/filtered_zmax100.fits
./data/LAT_GRB_analysis/localize_zmax100.fits
INDEF
INDEF
15
243216766
243216806
100
300000
100
###Output
_____no_output_____
###Markdown
Note that we have also reduced the acceptance cone to 15 degrees to filter out non-burst photons. **b) Run the localization tools, gtfindsrc and gtbin**If the data are essentially background-free as is the case here with a burst duration of ~50 sec, one can run the localization tools **gtfindsrc** and **gtbin** directly on the FT1 file (obtained when downloading the data file from the FSSC LAT Data server).**gtfindsrc** is necessary to centroid the GRB. For longer intervals where the background is significant, we can model the instrumental and celestial backgrounds using diffuse model components. For these data, the integration time is about 40 seconds so the diffuse and instrumental background components will make a negligible contribution to the total counts, so we proceed assuming they are negligible.We run **gtfindsrc** first to find the local maximum of the log-likelihood of a point source model as well as an estimate of the error radius. We will use this information to specify the size of the TS map in order to ensure that it contains the error circles we desire.
###Code
%%bash
gtfindsrc
./data/LAT_GRB_analysis/localize_zmax100.fits
./data/LAT_GRB_analysis/L1506171634094365357F22_SC00.fits
./data/LAT_GRB_analysis/GRB080916C_gtfindsrc.txt
P8R3_TRANSIENT020_V3
none
none
none
CEL
121.8
-61.3
MINUIT
1e-2
0.01
###Output
_____no_output_____
###Markdown
In this example of running **gtfindsrc**, the `FT2.fits` file was the renamed spacecraft data file downloaded from the FSSC LAT Data server.Since our source model comprises only a point source to represent the signal from the GRB, we do not provide a source model file or a target source name.Similarly, since the exposure map is used for diffuse components, we do not need to provide an unbinned exposure map. Use of a livetime cube will make the point source exposure calculation faster, but for integrations less than 1000 s, it is generally not needed. We have now obtained a position of maximum likelihood; we will use (119.861, -56.581) as our burst location from now on. It should be noted that GRB080916C is an exceptionally bright event in the LAT, and centroiding it with **gtfindsrc** is fast and adequate. In many other cases, a GRB may have far fewer LAT counts and the creation of a counts map using **gtbin** will be useful in localizing it:
###Code
%%bash
gtbin
CMAP
./data/LAT_GRB_analysis/localize_zmax100.fits
./data/LAT_GRB_analysis/GRB080916C_counts_map.fits
NONE
30
30
0.2
CEL
119.861
-56.581
0
AIT
###Output
_____no_output_____
###Markdown
We can now view the counts map in *ds9*:
###Code
!ds9 ./data/LAT_GRB_analysis/GRB080916C_counts_map.fits
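# If ds9 is not available, the counts map can also be displayed inline.
# (An alternative sketch, not part of the original thread; it assumes astropy and
# matplotlib are installed and that gtbin wrote the image to the primary HDU.)
from astropy.io import fits
import matplotlib.pyplot as plt
counts_image = fits.getdata('./data/LAT_GRB_analysis/GRB080916C_counts_map.fits')
plt.imshow(counts_image, origin='lower', cmap='viridis')
plt.colorbar(label='Counts per pixel')
plt.title('GRB080916C counts map')
plt.show()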
###Output
_____no_output_____
###Markdown
The counts map should look something like this: 2. Generating the analysis filesIn this subsection, we'll use the same data we extracted as for the localization analysis above.The purpose is to illustrate the steps necessary to model a GRB that is significantly fainter than GRB080916C; i.e., one for which the residual and diffuse backgrounds need to be modeled. This means that we will include diffuse components in the model definition and that will necessitate the exposure map calculation in order for the code to compute the predicted number of events. We'll see from the fit to the data that these diffuse components do indeed provide a negligible contribution to the overall counts for this burst. **a) Data subselection**Rerun **gtselect** with (119.861, -56.581) as the new search center:
###Code
%%bash
gtselect evclass=16
./data/LAT_GRB_analysis/filtered_zmax100.fits
./data/LAT_GRB_analysis/prompt_select.fits
119.861
-56.581
15
243216766
243216806
100
300000
100
###Output
_____no_output_____
###Markdown
**b) Model definition**The model will include a point source at the GRB location, an isotropic component (to represent the extragalactic diffuse and/or the residual background), and a Galactic diffuse component that uses the recommend Galactic diffuse model, `gal_2yearp7v6_v0.fits`. This file is available at the [LAT background models](http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html) page via the [FSSC Data Access](http://fermi.gsfc.nasa.gov/ssc/data/access/) page.The easiest way to generate a simple 3 component model like this would be to use the [modeleditor](http://www.slac.stanford.edu/exp/glast/wb/prod/pages/sciTools_modeleditor/modelEditor.html) program (included in the [Fermitools](http://fermi.gsfc.nasa.gov/ssc/data/analysis/software/)) by typing `ModelEditor` at the prompt. Here, we have added three sources to our model:1. GRB_080916C (you can rename the source by typing into the "Source Name:" text input box), with a PowerLaw2 spectrum. (The [Model Selection](http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Likelihood/Model_Selection.html) page of the Cicerone lists the possible spectral models.) We have adjusted the Lower Limit of its spectrum to be 100.0. We have also inputted the RA and Dec (calculated from gtfindsrc) into its spatial model. We have kept all other default values.2. GALPROP Diffuse (there is a specific option for this in the "Source" menu). Edit the `File:` entry of the spatial model to point to your local copy of `gll_iem_v07.fits`. We have kept all other defaults.3. Extragalactic Diffuse (there is a specific option for this). We have kept all the default values.If our analysis region had been close to any known LAT sources, we would have had to include them in our model (see this [tutorial](http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/likelihood_tutorial.htmlcreateSourceModel)). The xml file `GRB080916C_model.xml` should look like this:```xml ``` You can also create and edit model files by hand rather than use the modeleditor so long as the sources have the correct formats. For your convenience, you can create a local copy of the xml by running the python script below.
###Code
with open('./data/LAT_GRB_analysis/GRB080916C_model.xml', 'w') as file:
file.write("""<?xml version="1.0" ?>
<source_library title="Source Library" xmlns="http://fermi.gsfc.nasa.gov/source_library">
<source name="GRB_080916C" type="PointSource">
<spectrum type="PowerLaw2">
<parameter free="true" max="1000.0" min="1e-05" name="Integral" scale="1e-06" value="1.0"/>
<parameter free="true" max="-1.0" min="-5.0" name="Index" scale="1.0" value="-2.0"/>
<parameter free="false" max="200000.0" min="20.0" name="LowerLimit" scale="1.0" value="20.0"/>
<parameter free="false" max="200000.0" min="20.0" name="UpperLimit" scale="1.0" value="200000.0"/>
</spectrum>
<spatialModel type="SkyDirFunction">
<parameter free="false" max="360.0" min="0.0" name="RA" scale="1.0" value="119.861"/>
<parameter free="false" max="90.0" min="-90.0" name="DEC" scale="1.0" value="-56.581"/>
</spatialModel>
</source>
<source name="GALPROP Diffuse Source" type="DiffuseSource">
<spectrum type="ConstantValue">
<parameter free="true" max="10.0" min="0.0" name="Value" scale="1.0" value="1.0"/>
</spectrum>
<spatialModel file="$(FERMI_DIR)/refdata/fermi/galdiffuse/gll_iem_v07.fits" type="MapCubeFunction">
<parameter free="false" max="1000.0" min="0.001" name="Normalization" scale="1.0" value="1.0"/>
</spatialModel>
</source>
<source name="Extragalactic Diffuse Source" type="DiffuseSource">
<spectrum type="PowerLaw">
<parameter free="true" max="100.0" min="1e-05" name="Prefactor" scale="1e-07" value="1.6"/>
<parameter free="false" max="-1.0" min="-3.5" name="Index" scale="1.0" value="-2.1"/>
<parameter free="false" max="200.0" min="50.0" name="Scale" scale="1.0" value="100.0"/>
</spectrum>
<spatialModel type="ConstantValue">
<parameter free="false" max="10.0" min="0.0" name="Value" scale="1.0" value="1.0"/>
</spatialModel>
</source>
</source_library>""")
###Output
_____no_output_____
###Markdown
**c) Refining the good time intervals (GTIs)**In general, our next step would be to run **gtmktime** to remove the time intervals whose events fell outside of our zenith angle cut and apply temporal cuts to the data based on the spacecraft file (`FT2.fits`). However, as our data encompasses a short period of time, this step is inappropriate in this case (**gtmktime** will report errors).It would be necessary if were analyzing a longer period of time such as a longer burst, or extended emission as at the end of this thread (see the [Likelihood Tutorial](http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/likelihood_tutorial.html) for more information).Also, if we use **gtvcut** to review the file `prompt_select.fits`, we can see that the GTIs span the entire time selection we have made. **d) Diffuse response calculation**Since we are dealing with `evclass=16` (transient class) events, we need to run the **gtdiffrsp** tool.For each diffuse component in the model, the **gtdiffrsp** tool populates the `DIFRSP0` and `DIFRSP1` columns. They contain the integral over the source extent (for the Galactic and isotropic components this is essentially the entire sky) of the source intensity spatial distribution times the PSF and effective area. It computes the counts model density of the various diffuse components at each measured photon location, arrival time, and energy, and this information is used in maximizing the likelihood computation. This integral is also computed for the point sources in the model, but since those sources are delta-functions in sky position, the spatial part of the integral is trivial.Note that the large size of the [new Galactic diffuse background model](http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html) makes this a very resource-intensive process.
###Code
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/aux/gll_iem_v07.fits
!mv gll_iem_v07.fits $FERMI_DIR/refdata/fermi/galdiffuse
%%bash
gtdiffrsp
./data/LAT_GRB_analysis/prompt_select.fits
./data/LAT_GRB_analysis/FT2.fits
./data/LAT_GRB_analysis/GRB080916C_model.xml
P8R3_TRANSIENT020_V3
###Output
_____no_output_____
###Markdown
As mentioned before, **gtdiffrsp** modifies the input file by adding values to the `DIFRSP0` and `DIFRSP1` columns. In the tar file, for comparison purposes, the user can find two copies of the input file, one used as input of **gtdiffrsp** (named `prompt_select_pre_gtdiffrsp.fits`) and one obtained after running with **gtdiffrsp** and with the columns modified (named `prompt_select.fits`). **e) Livetime cube generation**For analysis of longer time intervals, we would need to run **gtltcube** to calculate a livetime cube. For this analysis, this step is unnecessary due to the short timescales involved. **f) Exposure map generation**We now use **gtexpmap** to generate the [exposure map](http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Data_Exploration/livetime_and_exposure.html). Note that the exposure maps from this tool are intended for use with **unbinned likelihood analysis only**:
###Code
%%bash
gtexpmap
./data/LAT_GRB_analysis/prompt_select.fits
./data/LAT_GRB_analysis/FT2.fits
none
./data/LAT_GRB_analysis/prompt_expmap.fits
P8R3_TRANSIENT020_V3
25
100
100
20
###Output
_____no_output_____
###Markdown
The radius of the source region should be larger than the extraction region in the FT1 data in order to account for PSF tail contributions of sources just outside the extraction region.For energies down to 100 MeV, a 10 degree buffer is recommended (i.e., the total radius is the sum of the extraction radius and the buffer area, totaling 25 in our case); for higher energy lower bounds, e.g., 1 GeV, 5 degrees or less is acceptable. Again, note that we did not provide an "exposure hypercube" (the livetime cube) file.For data durations less than about 1ks, **gtexpmap** will execute faster doing the time integration over the livetimes in the FT2 file directly. For longer integrations, computing the livetime cube with **gtltcube** will be faster (more information can be found in the [Explore LAT Data section](http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/explore_latdata.html)). At this step, the flux and spectral shape of the GRB prompt emission can be estimated using the **gtlike** tool (see section 4f). 3. Binned analysis with XSPEC (prompt emission)We will now perform a spectral analysis on the prompt emission using XSPEC. (A basic knowledge of the use of XSPEC is assumed.)This requires a `PHA` (spectral) file and a `RSP` (response) file. It should be noted that as an alternative to XSPEC, the RMFIT software (available as a user contribution) can be used for spectral modeling; however, it is not distributed as part of the Fermitools. **a) Generating PHA and RSP files**We use **gtbin** to create the `PHA1` file (the choice of `PHA1` for `Type of output file` indicates that you want to create a `PHA` file — the standard FITS file containing a single binned spectrum — spanning the entire time range):
###Code
%%bash
gtbin
PHA1
./data/LAT_GRB_analysis/prompt_select.fits
./data/LAT_GRB_analysis/080916C_LAT.pha
./data/LAT_GRB_analysis/FT2.fits
LOG
100
300000
30
###Output
_____no_output_____
###Markdown
The **gtrspgen** tool is then run to generate an XSPEC-compatible response matrix from the LAT IRFs.
###Code
%%bash
gtrspgen
PS
./data/LAT_GRB_analysis/080916C_LAT.pha
./data/LAT_GRB_analysis/FT2.fits
./data/LAT_GRB_analysis/080916C_LAT.rsp
90
0.5
CALDB
LOG
100
300000
100
###Output
_____no_output_____
###Markdown
**Notes**:* One should always use the `PS` response calculation method despite the option of using `GRB`. The latter was a method used in the early stages of the software creation but was later never fully developed. Ultimately, the `PS` method should always be more accurate, in particular for longer bursts. For short bursts, the difference in results and execution time between `PS` and `GRB` is negligible.* In **gtrspgen** you choose the incident photon energy bins; i.e., the energy bins over which the incident photon model is computed. **gtrspgen** reads the output photon channel energy grid from the PHA file. The RSP created by **gtrspgen** is the mapping from the incident photon energy bins into the output photon channels. These incident photon energy bins need not be the same as the output channels and they should generally over-sample them: * If there are only a few channels then the calculation of the expected number of photons in each channel will be more accurate if there are more incident photon energy bins. * You might want to include some incident photon energy bins above and below the range of channels to account for the LAT's finite energy resolution. Incident energy bins above the highest channel energy is particularly important if some for the photon's energy leaks out of the detector. **b) Backgrounds**For the prompt emission of GRB 080916C (and most LAT bursts), there is minimal background contamination. For analyses of longer integrations, one can estimate the background using off-source regions as for more traditional X-ray analyses. **c) Running XSPEC**You now have the two files necessary to analyze the burst spectrum with XSPEC:* A PHA file with the spectrum.* A RSP file with the response function.Note that there is no background file. All non-burst sources are expected to produce less than 1 photon in the extraction region during the burst! Here we provide the simplest example of fitting a spectrum with XSPEC; for further details you should consult the [XSPEC manual](http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/). 1. Start XSPEC**Note**: The default version is now release 12 (XSPEC12). 2. Load in the data: ```%%bash>>xspecdata ./data/080916C_LAT.pha``` When you specify a data file, XSPEC will try to load the response file in the PHA file's header. Alternatively, you can specify the response file separately with the command `response 080916C_LAT.rsp`.We now load in a power law model for fitting the data. For more information on available models, see [this example](http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/xspec11/manual/node26.html). 3. Load the model: ```%%bash>>xspecmodel pow``` 4. Set XSPEC to plot the data and to select the statistical method for fitting: ```bash>>xspeccpd /xssetplot energyplot ldata chistatistic cstat``` The `cpd` command sets the current plotting device, which in this case is the `xserve` option (an xwindow that persists after XSPEC has been closed).The next two commands tell XSPEC to create a logarithmic (the "l" of `ldata`) plot of the energy (along the x-axis), using the data file specified before, with the fit statistic. (Consult the [manual](http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/xspec11/manual/node26.html) for another example.)It is important to note that, for LAT GRB analysis, we generally want to use the C-statistic instead of chi-squared due to the small number of counts. (However, the command for plotting is still `chi` or `chisq` regardless of the statistic used.) We have set this in the last step. 5. 
Perform a fit and plot the results:
```
%%bash
>>xspec
fit
plot ldata resid
plot ldata ratio
```
They should all be invoked in the same xspec instance, so combining all of the steps above will yield:
###Code
%%bash
#For ldata resid
xspec
data ./data/LAT_GRB_analysis/080916C_LAT.pha
model pow
cpd /xs
setplot energy
plot ldata chi
statistic cstat
fit
plot ldata resid
###Output
_____no_output_____
###Markdown
This will give you something that looks like:
###Code
%%bash
# For ldata ratio
xspec
data ./data/LAT_GRB_analysis/080916C_LAT.pha
model pow
cpd /xs
setplot energy
plot ldata chi
statistic cstat
fit
plot ldata ratio
###Output
_____no_output_____
###Markdown
And this will give you something that looks like: 4. Unbinned analysis using gtlike (temporally expanded emission)**a) Data subselection**Here, we will search for emission which may occur after the prompt GRB event; temporally extended high-energy emission has been detected in a large number of LAT bursts. We rerun **gtselect** on a time interval of ~40 to 400 seconds after the trigger on the file downloaded from the archive (i.e. the EV file) and renamed `FT1.fits`, choosing to [exclude "transient"](http://fermi.gsfc.nasa.gov/ssc/data/analysis/LAT_caveats.html) class photons for the analysis of extended emission. (A longer interval has been chosen to demonstrate **gtmktime**, **gtltcube**, etc.)Remember to set `evclass=128` on the command line to ensure that we use the source class events.
###Code
# Make a copy of the EV file and rename it to FT1.fits.
!cp ./data/LAT_GRB_analysis/L1506171634094365357F22_EV00.fits ./data/LAT_GRB_analysis/FT1.fits
%%bash
gtselect evclass=128
./data/LAT_GRB_analysis/FT1.fits
./data/LAT_GRB_analysis/extended_select.fits
119.861
-56.581
15
243216806
243217166
100
300000
100
###Output
_____no_output_____
###Markdown
**b) Refining the GTIs**Since our subselection encompasses a longer period of time, we run gtmktime to exclude bad time intervals with the filter expression suggested in the [Cicerone](https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/):
###Code
%%bash
gtmktime
./data/LAT_GRB_analysis/FT2.fits
(DATA_QUAL>0)&&(LAT_CONFIG==1)
yes
./data/LAT_GRB_analysis/extended_select.fits
./data/LAT_GRB_analysis/extended_mktime.fits
###Output
_____no_output_____
###Markdown
Note: In an analysis of *transient* class events, we set the data quality portion of the filter expression to `DATA_QUAL>0` to retain these events. **c) Diffuse response calculation**We run now **gtdiffrsp**, making sure to use the correct response function.Again, note that the pass 8 Galactic diffuse background model causes this to be very resource-intensive. The tool modifies the input event data file, inserting values in the `DIFRSP0` and `DIFRSP1` columns.
###Code
%%bash
gtdiffrsp
./data/LAT_GRB_analysis/extended_mktime.fits
./data/LAT_GRB_analysis/FT2.fits
./data/LAT_GRB_analysis/GRB080916C_model.xml
P8R3_SOURCE_V3
###Output
_____no_output_____
###Markdown
**d) Livetime cube generation**Now that our data file encompasses a longer period of time, it requires us to calculate the livetime cube using **gtltcube**:
###Code
%%bash
gtltcube
./data/LAT_GRB_analysis/extended_mktime.fits
./data/LAT_GRB_analysis/FT2.fits
./data/LAT_GRB_analysis/extended_ltcube.fits
0.025
0.5
###Output
_____no_output_____
###Markdown
**e) Exposure map generation**This time we will specify a livetime cube file:
###Code
%%bash
gtexpmap
./data/LAT_GRB_analysis/extended_mktime.fits
./data/LAT_GRB_analysis/FT2.fits
./data/LAT_GRB_analysis/extended_ltcube.fits
./data/LAT_GRB_analysis/extended_expmap.fits
P8R3_SOURCE_V3
25
100
100
20
###Output
_____no_output_____
###Markdown
**f) Calculating the likelihood**We will use **gtlike** for this analysis. The `plot=yes` command brings up a plot of the fit results; `results=results.dat` saves a copy of the fit results to the file `results.dat`.
###Code
%%bash
gtlike plot=yes results=./data/LAT_GRB_analysis/results.dat
UNBINNED
./data/LAT_GRB_analysis/FT2.fits
./data/LAT_GRB_analysis/extended_mktime.fits
./data/LAT_GRB_analysis/extended_expmap.fits
./data/LAT_GRB_analysis/extended_ltcube.fits
./data/LAT_GRB_analysis/GRB080916C_model.xml
P8R3_SOURCE_V3
MINUIT
###Output
_____no_output_____
###Markdown
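Once **gtlike** finishes, the saved fit results can be inspected directly. This is a simple check, not part of the original thread; it assumes `results.dat` was written to the path given in the cell above.
###Code
!cat ./data/LAT_GRB_analysis/results.dat
###Output
_____no_output_____
###Markdown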
LAT Gamma-Ray Burst AnalysisThis procedure provides a step-by-step example of extracting and modeling a LAT Gamma-Ray Burst observation and modeling the prompt and temporally extended emissions using the X-Ray Spectral Fitting Package (**Xspec**) and **gtlike**, respectively. It should be noted that the LAT Low Energy (LLE) data products can also be used for LAT-detected GRBs (see [GRB Analysis Using GTBurst](https://fermidev.gsfc.nasa.gov/ssc/data/analysis/scitools/gtburst.html)). Prerequisites* [gtbin](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtbin.txt)* [gtdiffrsp](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtdiffrsp.txt)* [gtexpmap](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtexpmap.txt)* [gtfindsrc](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtfindsrc.txt)* [gtltcube](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtltcube.txt)* [gtmktime](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtmktime.txt)* [gtrspgen](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtrspgen.txt)* [gtselect](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtselect.txt)* [gtvcut](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtvcut.txt)* XSPEC, used as a spectral analysis tool in Step 3 of this procedure (See [Xanadu Data Analysis for X-Ray Astronomy](http://heasarc.gsfc.nasa.gov/docs/xanadu/).)* The FITS viewer [*fv*](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/heasarc.gsfc.nasa.gov/ftools/fv.html)* The astronomical imaging and data visualization application [*ds9*](http://hea-www.harvard.edu/RD/ds9/) AssumptionsIt is assumed that:* The referenced files reside in your working directory.* You know the time and location of the burst you wish to analyze. Note: For this thread, we will analyze GRB080916C, one of the brightest LAT GRBs on record. The relevant burst properties are: * T0 = 00:12:45.614 UT, 16 September 2008, corresponding to 243216766.614 seconds (MET) * Trigger 243216766 * RA = 121.8 degrees * Dec = -61.3 degrees * You have extracted the files used in this tutorial. You can download them in the code cell below, or you can extract them yourself in the [LAT Data Server](http://fermi.gsfc.nasa.gov/cgi-bin/ssc/LAT/LATDataQuery.cgi) with the following selections:```GRB080916CSearch Center (RA,Dec) = (121.8,-61.3)Radius = 40 degreesStart Time (MET) = 243216266.6 seconds (2011-03-28T00:00:00)Stop Time (MET) = 243218766.6 seconds (2011-04-04T00:00:00)Minimum Energy = 100 MeVMaximum Energy = 300000 MeV``` In this case, the GRB in question is of a sufficiently short duration, e.g. ~10's of seconds, so that the accumulation of LAT background counts is negligible. In order to study delayed emission, e.g. 10's of minutes to ~hour timescales, a likelihood analysis will be required.
###Code
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/latGrbAnalysis/LAT_GRB_analysis.tgz
!mkdir data
!mv LAT_GRB_analysis.tgz ./data
!tar -xzvf ./data/LAT_GRB_analysis.tgz -C ./data
###Output
x ./._LAT_GRB_analysis
x LAT_GRB_analysis/
x LAT_GRB_analysis/._080916C_LAT.pha
x LAT_GRB_analysis/080916C_LAT.pha
x LAT_GRB_analysis/._080916C_LAT.rsp
x LAT_GRB_analysis/080916C_LAT.rsp
x LAT_GRB_analysis/._cmap_zmax100.fits
x LAT_GRB_analysis/cmap_zmax100.fits
x LAT_GRB_analysis/._extended_expmap.fits
x LAT_GRB_analysis/extended_expmap.fits
x LAT_GRB_analysis/._extended_ltcube.fits
x LAT_GRB_analysis/extended_ltcube.fits
x LAT_GRB_analysis/._extended_mktime.fits
x LAT_GRB_analysis/extended_mktime.fits
x LAT_GRB_analysis/._extended_select.fits
x LAT_GRB_analysis/extended_select.fits
x LAT_GRB_analysis/._filtered_zmax100.fits
x LAT_GRB_analysis/filtered_zmax100.fits
x LAT_GRB_analysis/._FT2.fits
x LAT_GRB_analysis/FT2.fits
x LAT_GRB_analysis/._glg_cspec_n0_bn080916009_v07.rsp
x LAT_GRB_analysis/glg_cspec_n0_bn080916009_v07.rsp
x LAT_GRB_analysis/._glg_tte_n0_bn080916009_v01.fit
x LAT_GRB_analysis/glg_tte_n0_bn080916009_v01.fit
x LAT_GRB_analysis/._GRB080916C_model.xml
x LAT_GRB_analysis/GRB080916C_model.xml
x LAT_GRB_analysis/._L1506171634094365357F22_EV00.fits
x LAT_GRB_analysis/L1506171634094365357F22_EV00.fits
x LAT_GRB_analysis/._L1506171634094365357F22_SC00.fits
x LAT_GRB_analysis/L1506171634094365357F22_SC00.fits
x LAT_GRB_analysis/._lc_zmax100.fits
x LAT_GRB_analysis/lc_zmax100.fits
x LAT_GRB_analysis/._localize_zmax100.fits
x LAT_GRB_analysis/localize_zmax100.fits
x LAT_GRB_analysis/._prompt_expmap.fits
x LAT_GRB_analysis/prompt_expmap.fits
x LAT_GRB_analysis/._prompt_select.fits
x LAT_GRB_analysis/prompt_select.fits
x LAT_GRB_analysis/._results.dat
x LAT_GRB_analysis/results.dat
###Markdown
Steps:1. [Localize the GRB.](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/lat_grb_analysis.htmlTS)2. [Generating the analysis files.](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/lat_grb_analysis.htmlFILESGEN)3. [Binned analysis with XSPEC (prompt emission).](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/lat_grb_analysis.htmlXSPEC)4. [Unbinned analysis using gtlike (extended emission).](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/lat_grb_analysis.htmlGTLIKE)**NOTE**: During the analysis of the prompt emission (Steps 1 to 3) we will make use of the `P8R3_TRANSIENT020_V2` [response function](http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_LAT_IRFs/IRF_overview.html), while in the analysis of the extended emission (Step 4) the `P8R3_SOURCE_V2` response function will be used. 1. Localize the GRB**a) Select LAT data during prompt burst phase**This can either be done using a time interval ascertained from data from other instruments (e.g., using the GBM trigger time and T90 values reported in the [Fermi/GBM circular](http://gcn.gsfc.nasa.gov/gcn3/8245.gcn3)), or it can be estimated directly from the LAT light curve. Open the light curve `lc_zmax100.fits` with [*fv*](http://heasarc.nasa.gov/ftools/fv/):
###Code
!fv ./data/LAT_GRB_analysis/lc_zmax100.fits
###Output
/bin/sh: fv: command not found
###Markdown
You should get something that looks like this: Here, we have plotted TIME-243216766 on the x-axis (with TIMEDEL as error) and COUNTS on the y-axis (with ERROR as error). Hovering the cursor over the plot will yield its x-y coordinates; in this case, a plausible estimate of the LAT emission interval is (T0, T0+40s).We run **gtselect** to extract the data for this time interval.Remember to set `evclass=16` on the command line to ensure that we retain the transient class events:
###Code
%%bash
gtselect evclass=16
./data/LAT_GRB_analysis/filtered_zmax100.fits
./data/LAT_GRB_analysis/localize_zmax100.fits
INDEF
INDEF
15
243216766
243216806
100
300000
100
###Output
Input FT1 file[./data/LAT_GRB_analysis/FT1.fits] ./data/LAT_GRB_analysis/fil
tered_zmax100.fits
Output FT1 file[./data/LAT_GRB_analysis/extended_select.fits] ./data/LAT_GRB
_analysis/localize_zmax100.fits
RA for new search center (degrees) (0:360) [119.861] INDEF
Dec for new search center (degrees) (-90:90) [-56.581] INDEF
radius of new search region (degrees) (0:180) [15] 15
start time (MET in s) (0:) [243216806] 243216766
end time (MET in s) (0:) [243217166] 243216806
lower energy limit (MeV) (0:) [100] 100
upper energy limit (MeV) (0:) [300000] 300000
maximum zenith angle value (degrees) (0:180) [100] 100
Done.
###Markdown
Note that we have also reduced the acceptance cone to 15 degrees to filter out non-burst photons. **b) Run the localization tools, gtfindsrc and gtbin**If the data are essentially background-free as is the case here with a burst duration of ~50 sec, one can run the localization tools **gtfindsrc** and **gtbin** directly on the FT1 file (obtained when downloading the data file from the FSSC LAT Data server).**gtfindsrc** is necessary to centroid the GRB. For longer intervals where the background is significant, we can model the instrumental and celestial backgrounds using diffuse model components. For these data, the integration time is about 40 seconds so the diffuse and instrumental background components will make a negligible contribution to the total counts, so we proceed assuming they are negligible.We run **gtfindsrc** first to find the local maximum of the log-likelihood of a point source model as well as an estimate of the error radius. We will use this information to specify the size of the TS map in order to ensure that it contains the error circles we desire.
###Code
%%bash
gtfindsrc
./data/LAT_GRB_analysis/localize_zmax100.fits
./data/LAT_GRB_analysis/L1506171634094365357F22_SC00.fits
./data/LAT_GRB_analysis/GRB080916C_gtfindsrc.txt
P8R3_TRANSIENT020_V2
none
none
none
CEL
121.8
-61.3
MINUIT
1e-2
0.01
###Output
Event file[] ./data/LAT_GRB_analysis/localize_zmax100.fits
Spacecraft file[] ./data/LAT_GRB_analysis/L1506171634094365357F22_SC00.fits
Output file for trial points[] ./data/LAT_GRB_analysis/GRB080916C_gtfindsrc.
txt
Response functions to use[CALDB] P8R3_TRANSIENT020_V2
Livetime cube file[none] none
Unbinned exposure map[none] none
Source model file[none] none
Target source name[]
Source ' ' not found in source model.
Enter coordinates for test source:
Coordinate system (CEL|GAL) [CEL] CEL
Intial source Right Ascension (deg) (-360:360) [0] 121.8
Initial source Declination (deg) (-90:90) [0] -61.3
Optimizer (DRMNFB|NEWMINUIT|MINUIT|DRMNGB|LBFGS) [MINUIT] MINUIT
Tolerance for -log(Likelihood) at each trial point[1e-2] 1e-2
Covergence tolerance for positional fit[0.01] 0.01
Best fit position: 119.889, -56.6719
Error circle radius: 0.0663046
###Markdown
In this example of running **gtfindsrc**, the `FT2.fits` file was the renamed spacecraft data file downloaded from the FSSC LAT Data server.Since our source model comprises only a point source to represent the signal from the GRB, we do not provide a source model file or a target source name.Similarly, since the exposure map is used for diffuse components, we do not need to provide an unbinned exposure map. Use of a livetime cube will make the point source exposure calculation faster, but for integrations less than 1000 s, it is generally not needed. We have now obtained a position of maximum likelihood; we will use (119.861, -56.581) as our burst location from now on. It should be noted that GRB080916C is an exceptionally bright event in the LAT, and centroiding it with **gtfindsrc** is fast and adequate. In many other cases, a GRB may have far fewer LAT counts and the creation of a counts map using **gtbin** will be useful in localizing it:
###Code
%%bash
gtbin
CMAP
./data/LAT_GRB_analysis/localize_zmax100.fits
./data/LAT_GRB_analysis/GRB080916C_counts_map.fits
NONE
30
30
0.2
CEL
119.861
-56.581
0
AIT
###Output
This is gtbin version HEAD
Type of output file (CCUBE|CMAP|LC|PHA1|PHA2|HEALPIX) [PHA1] CMAP
Event data file name[./data/LAT_GRB_analysis/prompt_select.fits] ./data/LAT_
GRB_analysis/localize_zmax100.fits
Output file name[./data/LAT_GRB_analysis/080916C_LAT.pha] ./data/LAT_GRB_ana
lysis/GRB080916C_counts_map.fits
Spacecraft data file name[./data/LAT_GRB_analysis/FT2.fits] NONE
Size of the X axis in pixels[30] 30
Size of the Y axis in pixels[30] 30
Image scale (in degrees/pixel)[0.2] 0.2
Coordinate system (CEL - celestial, GAL -galactic) (CEL|GAL) [CEL] CEL
First coordinate of image center in degrees (RA or galactic l)[119.861] 119.
861
Second coordinate of image center in degrees (DEC or galactic b)[-56.581] -5
6.581
Rotation angle of image axis, in degrees[0] 0
Projection method e.g. AIT|ARC|CAR|GLS|MER|NCP|SIN|STG|TAN:[AIT] AIT
###Markdown
We can now view the counts map in *ds9*:
###Code
!ds9 ./data/LAT_GRB_analysis/GRB080916C_counts_map.fits
###Output
_____no_output_____
###Markdown
The counts map should look something like this: 2. Generating the analysis filesIn this subsection, we'll use the same data we extracted as for the localization analysis above.The purpose is to illustrate the steps necessary to model a GRB that is significantly fainter than GRB080916C; i.e., one for which the residual and diffuse backgrounds need to be modeled. This means that we will include diffuse components in the model definition and that will necessitate the exposure map calculation in order for the code to compute the predicted number of events. We'll see from the fit to the data that these diffuse components do indeed provide a negligible contribution to the overall counts for this burst. **a) Data subselection**Rerun **gtselect** with (119.861, -56.581) as the new search center:
###Code
%%bash
gtselect evclass=16
./data/LAT_GRB_analysis/filtered_zmax100.fits
./data/LAT_GRB_analysis/prompt_select.fits
119.861
-56.581
15
243216766
243216806
100
300000
100
###Output
Input FT1 file[./data/LAT_GRB_analysis/filtered_zmax100.fits] ./data/LAT_GRB
_analysis/filtered_zmax100.fits
Output FT1 file[./data/LAT_GRB_analysis/localize_zmax100.fits] ./data/LAT_GR
B_analysis/prompt_select.fits
RA for new search center (degrees) (0:360) [INDEF] 119.861
Dec for new search center (degrees) (-90:90) [INDEF] -56.581
radius of new search region (degrees) (0:180) [15] 15
start time (MET in s) (0:) [243216766] 243216766
end time (MET in s) (0:) [243216806] 243216806
lower energy limit (MeV) (0:) [100] 100
upper energy limit (MeV) (0:) [300000] 300000
maximum zenith angle value (degrees) (0:180) [100] 100
Done.
###Markdown
**b) Model definition**The model will include a point source at the GRB location, an isotropic component (to represent the extragalactic diffuse and/or the residual background), and a Galactic diffuse component that uses the recommend Galactic diffuse model, `gal_2yearp7v6_v0.fits`. This file is available at the [LAT background models](http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html) page via the [FSSC Data Access](http://fermi.gsfc.nasa.gov/ssc/data/access/) page.The easiest way to generate a simple 3 component model like this would be to use the [modeleditor](http://www.slac.stanford.edu/exp/glast/wb/prod/pages/sciTools_modeleditor/modelEditor.html) program (included in the [Fermitools](http://fermi.gsfc.nasa.gov/ssc/data/analysis/software/)) by typing `ModelEditor` at the prompt. Here, we have added three sources to our model:1. GRB_080916C (you can rename the source by typing into the "Source Name:" text input box), with a PowerLaw2 spectrum. (The [Model Selection](http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Likelihood/Model_Selection.html) page of the Cicerone lists the possible spectral models.) We have adjusted the Lower Limit of its spectrum to be 100.0. We have also inputted the RA and Dec (calculated from gtfindsrc) into its spatial model. We have kept all other default values.2. GALPROP Diffuse (there is a specific option for this in the "Source" menu). Edit the `File:` entry of the spatial model to point to your local copy of `gll_iem_v06.fits`. We have kept all other defaults.3. Extragalactic Diffuse (there is a specific option for this). We have kept all the default values.If our analysis region had been close to any known LAT sources, we would have had to include them in our model (see this [tutorial](http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/likelihood_tutorial.htmlcreateSourceModel)). The xml file `GRB080916C_model.xml` should look like this:```xml ``` You can also create and edit model files by hand rather than use the modeleditor so long as the sources have the correct formats. For your convenience, you can create a local copy of the xml by running the python script below.
###Code
with open('./data/LAT_GRB_analysis/GRB080916C_model.xml', 'w') as file:
file.write("""<?xml version="1.0" ?>
<source_library title="Source Library" xmlns="http://fermi.gsfc.nasa.gov/source_library">
<source name="GRB_080916C" type="PointSource">
<spectrum type="PowerLaw2">
<parameter free="true" max="1000.0" min="1e-05" name="Integral" scale="1e-06" value="1.0"/>
<parameter free="true" max="-1.0" min="-5.0" name="Index" scale="1.0" value="-2.0"/>
<parameter free="false" max="200000.0" min="20.0" name="LowerLimit" scale="1.0" value="20.0"/>
<parameter free="false" max="200000.0" min="20.0" name="UpperLimit" scale="1.0" value="200000.0"/>
</spectrum>
<spatialModel type="SkyDirFunction">
<parameter free="false" max="360.0" min="0.0" name="RA" scale="1.0" value="119.861"/>
<parameter free="false" max="90.0" min="-90.0" name="DEC" scale="1.0" value="-56.581"/>
</spatialModel>
</source>
<source name="GALPROP Diffuse Source" type="DiffuseSource">
<spectrum type="ConstantValue">
<parameter free="true" max="10.0" min="0.0" name="Value" scale="1.0" value="1.0"/>
</spectrum>
<spatialModel file="$(FERMI_DIR)/refdata/fermi/galdiffuse/gll_iem_v06.fits" type="MapCubeFunction">
<parameter free="false" max="1000.0" min="0.001" name="Normalization" scale="1.0" value="1.0"/>
</spatialModel>
</source>
<source name="Extragalactic Diffuse Source" type="DiffuseSource">
<spectrum type="PowerLaw">
<parameter free="true" max="100.0" min="1e-05" name="Prefactor" scale="1e-07" value="1.6"/>
<parameter free="false" max="-1.0" min="-3.5" name="Index" scale="1.0" value="-2.1"/>
<parameter free="false" max="200.0" min="50.0" name="Scale" scale="1.0" value="100.0"/>
</spectrum>
<spatialModel type="ConstantValue">
<parameter free="false" max="10.0" min="0.0" name="Value" scale="1.0" value="1.0"/>
</spatialModel>
</source>
</source_library>""")
###Output
_____no_output_____
###Markdown
**c) Refining the good time intervals (GTIs)**In general, our next step would be to run **gtmktime** to remove the time intervals whose events fell outside of our zenith angle cut and apply temporal cuts to the data based on the spacecraft file (`FT2.fits`). However, as our data encompasses a short period of time, this step is inappropriate in this case (**gtmktime** will report errors).It would be necessary if were analyzing a longer period of time such as a longer burst, or extended emission as at the end of this thread (see the [Likelihood Tutorial](http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/likelihood_tutorial.html) for more information).Also, if we use **gtvcut** to review the file `prompt_select.fits`, we can see that the GTIs span the entire time selection we have made. **d) Diffuse response calculation**Since we are dealing with `evclass=16` (transient class) events, we need to run the **gtdiffrsp** tool.For each diffuse component in the model, the **gtdiffrsp** tool populates the `DIFRSP0` and `DIFRSP1` columns. They contain the integral over the source extent (for the Galactic and isotropic components this is essentially the entire sky) of the source intensity spatial distribution times the PSF and effective area. It computes the counts model density of the various diffuse components at each measured photon location, arrival time, and energy, and this information is used in maximizing the likelihood computation. This integral is also computed for the point sources in the model, but since those sources are delta-functions in sky position, the spatial part of the integral is trivial.Note that the large size of the [new Galactic diffuse background model](http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html) makes this a very resource-intensive process.
###Code
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/aux/gll_iem_v06.fits
!mv gll_iem_v06.fits $FERMI_DIR/refdata/fermi/galdiffuse
%%bash
gtdiffrsp
./data/LAT_GRB_analysis/prompt_select.fits
./data/LAT_GRB_analysis/FT2.fits
./data/LAT_GRB_analysis/GRB080916C_model.xml
P8R3_TRANSIENT020_V2
###Output
Event data file[./data/LAT_GRB_analysis/extended_mktime.fits] ./data/LAT_GRB
_analysis/prompt_select.fits
Spacecraft data file[./data/LAT_GRB_analysis/FT2.fits] ./data/LAT_GRB_analys
is/FT2.fits
Source model file[./data/LAT_GRB_analysis/GRB080916C_model.xml] ./data/LAT_G
RB_analysis/GRB080916C_model.xml
Response functions to use[P8R3_SOURCE_V2] P8R3_TRANSIENT020_V2
adding source Extragalactic Diffuse Source
adding source GALPROP Diffuse Source
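A quick way to confirm that the run above actually populated the diffuse response columns is to open the modified event file directly. The check below is only an illustrative sketch added for convenience (it assumes `astropy` is available in the same environment; it is not part of the original command sequence):

```python
from astropy.io import fits
import numpy as np

# the FT1 event file modified in place by gtdiffrsp
with fits.open('./data/LAT_GRB_analysis/prompt_select.fits') as ft1:
    events = ft1['EVENTS'].data
    for col in ('DIFRSP0', 'DIFRSP1'):
        print(col, 'filled for', np.sum(events[col] != 0.0), 'of', len(events), 'events')
```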
###Markdown
As mentioned before, **gtdiffrsp** modifies the input file by adding values to the `DIFRSP0` and `DIFRSP1` columns. In the tar file, for comparison purposes, the user can find two copies of the input file, one used as input of **gtdiffrsp** (named `prompt_select_pre_gtdiffrsp.fits`) and one obtained after running with **gtdiffrsp** and with the columns modified (named `prompt_select.fits`). **e) Livetime cube generation**For analysis of longer time intervals, we would need to run **gtltcube** to calculate a livetime cube. For this analysis, this step is unnecessary due to the short timescales involved. **f) Exposure map generation**We now use **gtexpmap** to generate the [exposure map](http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Data_Exploration/livetime_and_exposure.html). Note that the exposure maps from this tool are intended for use with **unbinned likelihood analysis only**:
###Code
%%bash
gtexpmap
./data/LAT_GRB_analysis/prompt_select.fits
./data/LAT_GRB_analysis/FT2.fits
none
./data/LAT_GRB_analysis/prompt_expmap.fits
P8R3_TRANSIENT020_V2
25
100
100
20
###Output
Event data file[./data/LAT_GRB_analysis/extended_mktime.fits] ./data/LAT_GRB
_analysis/prompt_select.fits
Spacecraft data file[./data/LAT_GRB_analysis/FT2.fits] ./data/LAT_GRB_analys
is/FT2.fits
Exposure hypercube file[./data/LAT_GRB_analysis/extended_ltcube.fits] none
output file name[./data/LAT_GRB_analysis/extended_expmap.fits] ./data/LAT_GR
B_analysis/prompt_expmap.fits
Response functions[P8R3_SOURCE_V2] P8R3_TRANSIENT020_V2
Radius of the source region (in degrees)[25] 25
Number of longitude points (2:1000) [100] 100
Number of latitude points (2:1000) [100] 100
Number of energies (2:100) [20] 20
Computing the ExposureMap (no expCube file given)
###Markdown
The radius of the source region should be larger than the extraction region in the FT1 data in order to account for PSF tail contributions of sources just outside the extraction region.For energies down to 100 MeV, a 10 degree buffer is recommended (i.e., the total radius is the sum of the extraction radius and the buffer area, totaling 25 in our case); for higher energy lower bounds, e.g., 1 GeV, 5 degrees or less is acceptable. Again, note that we did not provide an "exposure hypercube" (the livetime cube) file.For data durations less than about 1ks, **gtexpmap** will execute faster doing the time integration over the livetimes in the FT2 file directly. For longer integrations, computing the livetime cube with **gtltcube** will be faster (more information can be found in the [Explore LAT Data section](http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/explore_latdata.html)). At this step, the flux and spectral shape of the GRB prompt emission can be estimated using the **gtlike** tool (see section 4f). 3. Binned analysis with XSPEC (prompt emission)We will now perform a spectral analysis on the prompt emission using XSPEC. (A basic knowledge of the use of XSPEC is assumed.)This requires a `PHA` (spectral) file and a `RSP` (response) file. It should be noted that as an alternative to XSPEC, the RMFIT software (available as a user contribution) can be used for spectral modeling; however, it is not distributed as part of the Fermitools. **a) Generating PHA and RSP files**We use **gtbin** to create the `PHA1` file (the choice of `PHA1` for `Type of output file` indicates that you want to create a `PHA` file — the standard FITS file containing a single binned spectrum — spanning the entire time range):
###Code
%%bash
gtbin
PHA1
./data/LAT_GRB_analysis/prompt_select.fits
./data/LAT_GRB_analysis/080916C_LAT.pha
./data/LAT_GRB_analysis/FT2.fits
LOG
100
300000
30
###Output
This is gtbin version HEAD
Type of output file (CCUBE|CMAP|LC|PHA1|PHA2|HEALPIX) [CMAP] PHA1
Event data file name[./data/LAT_GRB_analysis/localize_zmax100.fits] ./data/L
AT_GRB_analysis/prompt_select.fits
Output file name[./data/LAT_GRB_analysis/GRB080916C_counts_map.fits] ./data/
LAT_GRB_analysis/080916C_LAT.pha
Spacecraft data file name[NONE] ./data/LAT_GRB_analysis/FT2.fits
Algorithm for defining energy bins (FILE|LIN|LOG) [LOG] LOG
Start value for first energy bin in MeV[100] 100
Stop value for last energy bin in MeV[300000] 300000
Number of logarithmically uniform energy bins[30] 30
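Before moving on to the response matrix, it can be useful to peek at the spectrum that gtbin just wrote. This snippet is a sketch added for illustration (it assumes `astropy` is installed; the extension and column names follow the standard OGIP PHA convention):

```python
from astropy.io import fits

with fits.open('./data/LAT_GRB_analysis/080916C_LAT.pha') as pha:
    spec = pha['SPECTRUM'].data
    print(spec.columns.names)                      # channel/counts columns of the binned spectrum
    print('total counts in spectrum:', spec['COUNTS'].sum())
```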
###Markdown
The **gtrspgen** tool is then run to generate an XSPEC-compatible response matrix from the LAT IRFs.
###Code
%%bash
gtrspgen
PS
./data/LAT_GRB_analysis/080916C_LAT.pha
./data/LAT_GRB_analysis/FT2.fits
./data/LAT_GRB_analysis/080916C_LAT.rsp
90
0.5
CALDB
LOG
100
300000
100
###Output
This is gtrspgen version HEAD
Response calculation method (GRB|PS) [GRB] PS
Spectrum file name[] ./data/LAT_GRB_analysis/080916C_LAT.pha
Spacecraft data file name[] ./data/LAT_GRB_analysis/FT2.fits
Output file name[] ./data/LAT_GRB_analysis/080916C_LAT.rsp
Cutoff angle for binning SC pointings (degrees)[60.] 90
Size of bins for binning SC pointings (cos(theta))[.05] 0.5
Response function to use, Handoff|DC2|DC2A|DC2FA|DC2BA|DC2FB etc[P6_V3_DIFFUSE]
CALDB
Algorithm for defining true energy bins (FILE|LIN|LOG) [LOG] LOG
Start value for first energy bin in MeV[30.] 100
Stop value for last energy bin in MeV[200000.] 300000
Number of logarithmically uniform energy bins[100] 100
###Markdown
**Notes**:* One should always use the `PS` response calculation method despite the option of using `GRB`. The latter was a method used in the early stages of the software creation but was later never fully developed. Ultimately, the `PS` method should always be more accurate, in particular for longer bursts. For short bursts, the difference in results and execution time between `PS` and `GRB` is negligible.* In **gtrspgen** you choose the incident photon energy bins; i.e., the energy bins over which the incident photon model is computed. **gtrspgen** reads the output photon channel energy grid from the PHA file. The RSP created by **gtrspgen** is the mapping from the incident photon energy bins into the output photon channels. These incident photon energy bins need not be the same as the output channels and they should generally over-sample them: * If there are only a few channels then the calculation of the expected number of photons in each channel will be more accurate if there are more incident photon energy bins. * You might want to include some incident photon energy bins above and below the range of channels to account for the LAT's finite energy resolution. Incident energy bins above the highest channel energy is particularly important if some for the photon's energy leaks out of the detector. **b) Backgrounds**For the prompt emission of GRB 080916C (and most LAT bursts), there is minimal background contamination. For analyses of longer integrations, one can estimate the background using off-source regions as for more traditional X-ray analyses. **c) Running XSPEC**You now have the two files necessary to analyze the burst spectrum with XSPEC:* A PHA file with the spectrum.* A RSP file with the response function.Note that there is no background file. All non-burst sources are expected to produce less than 1 photon in the extraction region during the burst! Here we provide the simplest example of fitting a spectrum with XSPEC; for further details you should consult the [XSPEC manual](http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/). 1. Start XSPEC**Note**: The default version is now release 12 (XSPEC12). 2. Load in the data: ```%%bash>>xspecdata ./data/080916C_LAT.pha``` When you specify a data file, XSPEC will try to load the response file in the PHA file's header. Alternatively, you can specify the response file separately with the command `response 080916C_LAT.rsp`.We now load in a power law model for fitting the data. For more information on available models, see [this example](http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/xspec11/manual/node26.html). 3. Load the model: ```%%bash>>xspecmodel pow``` 4. Set XSPEC to plot the data and to select the statistical method for fitting: ```bash>>xspeccpd /xssetplot energyplot ldata chistatistic cstat``` The `cpd` command sets the current plotting device, which in this case is the `xserve` option (an xwindow that persists after XSPEC has been closed).The next two commands tell XSPEC to create a logarithmic (the "l" of `ldata`) plot of the energy (along the x-axis), using the data file specified before, with the fit statistic. (Consult the [manual](http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/xspec11/manual/node26.html) for another example.)It is important to note that, for LAT GRB analysis, we generally want to use the C-statistic instead of chi-squared due to the small number of counts. (However, the command for plotting is still `chi` or `chisq` regardless of the statistic used.) We have set this in the last step. 5. 
Perform a fit and plot the results: ```%%bash>>xspecfitplot ldata residplot ldata ratio``` They should all be invoked in the same xspec instance, so combining all of the steps above will yield:
###Code
%%bash
# For ldata resid: run the whole command sequence in a single XSPEC session by feeding it on stdin;
# the lone "/*" line accepts the default parameter values when the power-law model is defined
xspec <<EOF
data ./data/LAT_GRB_analysis/080916C_LAT.pha
model pow
/*
cpd /xs
setplot energy
plot ldata chi
statistic cstat
fit
plot ldata resid
exit
EOF
###Output
bash: line 2: xspec: command not found
bash: line 3: data: command not found
bash: line 4: model: command not found
bash: line 7: cpd: command not found
bash: line 8: setplot: command not found
bash: line 9: plot: command not found
bash: line 10: statistic: command not found
bash: line 11: fit: command not found
bash: line 12: plot: command not found
###Markdown
This will give you something that looks like:
###Code
%%bash
# For ldata ratio: same as above, driving a single XSPEC session via a heredoc
xspec <<EOF
data ./data/LAT_GRB_analysis/080916C_LAT.pha
model pow
/*
cpd /xs
setplot energy
plot ldata chi
statistic cstat
fit
plot ldata ratio
exit
EOF
###Output
bash: line 2: xspec: command not found
bash: line 3: data: command not found
bash: line 4: model: command not found
bash: line 7: cpd: command not found
bash: line 8: setplot: command not found
bash: line 9: plot: command not found
bash: line 10: statistic: command not found
bash: line 11: fit: command not found
bash: line 12: plot: command not found
###Markdown
And this will give you something that looks like: 4. Unbinned analysis using gtlike (temporally expanded emission)**a) Data subselection**Here, we will search for emission which may occur after the prompt GRB event; temporally extended high-energy emission has been detected in a large number of LAT bursts. We rerun **gtselect** on a time interval of ~40 to 400 seconds after the trigger on the file downloaded from the archive (i.e. the EV file) and renamed `FT1.fits`, choosing to [exclude "transient"](http://fermi.gsfc.nasa.gov/ssc/data/analysis/LAT_caveats.html) class photons for the analysis of extended emission. (A longer interval has been chosen to demonstrate **gtmktime**, **gtltcube**, etc.)Remember to set `evclass=128` on the command line to ensure that we use the source class events.
###Code
# Make a copy of the EV file and rename it to FT1.fits.
!cp ./data/LAT_GRB_analysis/L1506171634094365357F22_EV00.fits ./data/LAT_GRB_analysis/FT1.fits
%%bash
gtselect evclass=128
./data/LAT_GRB_analysis/FT1.fits
./data/LAT_GRB_analysis/extended_select.fits
119.861
-56.581
15
243216806
243217166
100
300000
100
###Output
Input FT1 file[./data/LAT_GRB_analysis/filtered_zmax100.fits] ./data/LAT_GRB
_analysis/FT1.fits
Output FT1 file[./data/LAT_GRB_analysis/prompt_select.fits] ./data/LAT_GRB_a
nalysis/extended_select.fits
RA for new search center (degrees) (0:360) [119.861] 119.861
Dec for new search center (degrees) (-90:90) [-56.581] -56.581
radius of new search region (degrees) (0:180) [15] 15
start time (MET in s) (0:) [243216766] 243216806
end time (MET in s) (0:) [243216806] 243217166
lower energy limit (MeV) (0:) [100] 100
upper energy limit (MeV) (0:) [300000] 300000
maximum zenith angle value (degrees) (0:180) [100] 100
Done.
###Markdown
**b) Refining the GTIs**Since our subselection encompasses a longer period of time, we run gtmktime to exclude bad time intervals with the filter expression suggested in the [Cicerone](https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/):
###Code
%%bash
gtmktime
./data/LAT_GRB_analysis/FT2.fits
(DATA_QUAL>0)&&(LAT_CONFIG==1)
yes
./data/LAT_GRB_analysis/extended_select.fits
./data/LAT_GRB_analysis/extended_mktime.fits
###Output
Spacecraft data file[./data/LAT_GRB_analysis/FT2.fits] ./data/LAT_GRB_analys
is/FT2.fits
Filter expression[(DATA_QUAL>0)&&(LAT_CONFIG==1)] (DATA_QUAL>0)&&(LAT_CONFIG
==1)
Apply ROI-based zenith angle cut[yes] yes
Event data file[./data/LAT_GRB_analysis/extended_select.fits] ./data/LAT_GRB
_analysis/extended_select.fits
Output event file name[./data/LAT_GRB_analysis/extended_mktime.fits] ./data/
LAT_GRB_analysis/extended_mktime.fits
###Markdown
Note: In an analysis of *transient* class events, we set the data quality portion of the filter expression to `DATA_QUAL>0` to retain these events. **c) Diffuse response calculation**We run now **gtdiffrsp**, making sure to use the correct response function.Again, note that the pass 8 Galactic diffuse background model causes this to be very resource-intensive. The tool modifies the input event data file, inserting values in the `DIFRSP0` and `DIFRSP1` columns.
###Code
%%bash
gtdiffrsp
./data/LAT_GRB_analysis/extended_mktime.fits
./data/LAT_GRB_analysis/FT2.fits
./data/LAT_GRB_analysis/GRB080916C_model.xml
P8R3_SOURCE_V2
###Output
Event data file[./data/LAT_GRB_analysis/prompt_select.fits] ./data/LAT_GRB_a
nalysis/extended_mktime.fits
Spacecraft data file[./data/LAT_GRB_analysis/FT2.fits] ./data/LAT_GRB_analys
is/FT2.fits
Source model file[./data/LAT_GRB_analysis/GRB080916C_model.xml] ./data/LAT_G
RB_analysis/GRB080916C_model.xml
Response functions to use[P8R3_TRANSIENT020_V2] P8R3_SOURCE_V2
adding source Extragalactic Diffuse Source
adding source GALPROP Diffuse Source
###Markdown
**d) Livetime cube generation**Now that our data file encompasses a longer period of time, it requires us to calculate the livetime cube using **gtltcube**:
###Code
%%bash
gtltcube
./data/LAT_GRB_analysis/extended_mktime.fits
./data/LAT_GRB_analysis/FT2.fits
./data/LAT_GRB_analysis/extended_ltcube.fits
0.025
0.5
###Output
Event data file[./data/LAT_GRB_analysis/extended_mktime.fits] ./data/LAT_GRB
_analysis/extended_mktime.fits
Spacecraft data file[./data/LAT_GRB_analysis/FT2.fits] ./data/LAT_GRB_analys
is/FT2.fits
Output file[./data/LAT_GRB_analysis/extended_ltcube.fits] ./data/LAT_GRB_ana
lysis/extended_ltcube.fits
Step size in cos(theta) (0.:1.) [0.025] 0.025
Pixel size (degrees)[0.5] 0.5
###Markdown
**e) Exposure map generation**This time we will specify a livetime cube file:
###Code
%%bash
gtexpmap
./data/LAT_GRB_analysis/extended_mktime.fits
./data/LAT_GRB_analysis/FT2.fits
./data/LAT_GRB_analysis/extended_ltcube.fits
./data/LAT_GRB_analysis/extended_expmap.fits
P8R3_SOURCE_V2
25
100
100
20
###Output
Event data file[./data/LAT_GRB_analysis/prompt_select.fits] ./data/LAT_GRB_a
nalysis/extended_mktime.fits
Spacecraft data file[./data/LAT_GRB_analysis/FT2.fits] ./data/LAT_GRB_analys
is/FT2.fits
Exposure hypercube file[none] ./data/LAT_GRB_analysis/extended_ltcube.fits
output file name[./data/LAT_GRB_analysis/prompt_expmap.fits] ./data/LAT_GRB_
analysis/extended_expmap.fits
Response functions[P8R3_TRANSIENT020_V2] P8R3_SOURCE_V2
Radius of the source region (in degrees)[25] 25
Number of longitude points (2:1000) [100] 100
Number of latitude points (2:1000) [100] 100
Number of energies (2:100) [20] 20
Computing the ExposureMap using ./data/LAT_GRB_analysis/extended_ltcube.fits
###Markdown
**f) Calculating the likelihood**We will use **gtlike** for this analysis. The `plot=yes` command brings up a plot of the fit results; `results=results.dat` saves a copy of the fit results to the file `results.dat`.
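As an aside, the same unbinned fit can also be scripted from Python through the pyLikelihood interface that ships with the Fermitools. The sketch below shows that route under the assumption that the Fermitools Python modules are importable in this environment; the interactive gtlike run actually used in this thread follows in the next cell.

```python
# sketch: the same unbinned likelihood fit driven from Python (pyLikelihood)
from UnbinnedAnalysis import UnbinnedObs, UnbinnedAnalysis

obs = UnbinnedObs('./data/LAT_GRB_analysis/extended_mktime.fits',
                  './data/LAT_GRB_analysis/FT2.fits',
                  expMap='./data/LAT_GRB_analysis/extended_expmap.fits',
                  expCube='./data/LAT_GRB_analysis/extended_ltcube.fits',
                  irfs='P8R3_SOURCE_V2')
like = UnbinnedAnalysis(obs, './data/LAT_GRB_analysis/GRB080916C_model.xml', optimizer='Minuit')
print('-log(L) =', like.fit(verbosity=0))       # returns the minimized -log(likelihood)
print('TS(GRB_080916C) =', like.Ts('GRB_080916C'))
print('flux =', like.flux('GRB_080916C', emin=100, emax=300000), 'photons/cm^2/s')
```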
###Code
%%bash
gtlike plot=yes results=./data/LAT_GRB_analysis/results.dat
UNBINNED
./data/LAT_GRB_analysis/FT2.fits
./data/LAT_GRB_analysis/extended_mktime.fits
./data/LAT_GRB_analysis/extended_expmap.fits
./data/LAT_GRB_analysis/extended_ltcube.fits
./data/LAT_GRB_analysis/GRB080916C_model.xml
P8R3_SOURCE_V2
MINUIT
###Output
Statistic to use (BINNED|UNBINNED) [UNBINNED] UNBINNED
Spacecraft file[none] ./data/LAT_GRB_analysis/FT2.fits
Event file[none] ./data/LAT_GRB_analysis/extended_mktime.fits
Unbinned exposure map[none] ./data/LAT_GRB_analysis/extended_expmap.fits
Exposure hypercube file[none] ./data/LAT_GRB_analysis/extended_ltcube.fits
Source model file[] ./data/LAT_GRB_analysis/GRB080916C_model.xml
Response functions to use[CALDB] P8R3_SOURCE_V2
Optimizer (DRMNFB|NEWMINUIT|MINUIT|DRMNGB|LBFGS) [MINUIT] MINUIT
**********
** 1 **SET PRINT .000
**********
**********
** 2 **SET NOWARN
**********
PARAMETER DEFINITIONS:
NO. NAME VALUE STEP SIZE LIMITS
1 'Prefactor ' 1.6000 1.0000 .10000E-04 100.00
2 'Value ' 1.0000 1.0000 .0000 10.000
3 'Integral ' 1.0000 1.0000 .10000E-04 1000.0
4 'Index ' -2.0000 1.0000 -5.0000 -1.0000
**********
** 3 **SET ERR .5000
**********
**********
** 4 **SET GRAD 1.000
**********
**********
** 5 **MINIMIZE 800.0 2.000
**********
MIGRAD MINIMIZATION HAS CONVERGED.
MIGRAD WILL VERIFY CONVERGENCE AND ERROR MATRIX.
FCN= 612.1321 FROM MIGRAD STATUS=CONVERGED 70 CALLS 71 TOTAL
EDM= .80E-04 STRATEGY= 1 ERROR MATRIX ACCURATE
EXT PARAMETER STEP FIRST
NO. NAME VALUE ERROR SIZE DERIVATIVE
1 Prefactor .21271E-04 .99149 .84415E-01** at limit **
2 Value 2.1292 .42566 .46267E-01 .37155E-01
3 Integral 321.48 90.971 .43760E-01 .81086E-01
4 Index -2.0185 .12374 .16695E-01 .15893
ERR DEF= .500
Final values:
Prefactor = 2.12712e-05
Value = 2.12921
Integral = 321.485
Index = -2.01851
**********
** 6 **HESSE
**********
FCN= 612.1321 FROM HESSE STATUS=OK 23 CALLS 94 TOTAL
EDM= .73E-04 STRATEGY= 1 ERROR MATRIX ACCURATE
EXT PARAMETER INTERNAL INTERNAL
NO. NAME VALUE ERROR STEP SIZE VALUE
1 Prefactor .21271E-04 1.0063 .24006E-02 -1.5715
WARNING - - ABOVE PARAMETER IS AT LIMIT.
2 Value 2.1292 .42399 .24968E-03 5.6716
3 Integral 321.48 79.580 .20312E-03 -.36509
4 Index -2.0185 .10807 .73783E-04 19.363
ERR DEF= .500
Minuit fit quality: 3 estimated distance: 7.2751e-05
Minuit parameter uncertainties:
1 0.00674714
2 0.42475
3 79.9698
4 0.108139
Computing TS values for each source (3 total)
Photon fluxes are computed for the energy range 100 to 300000 MeV
Extragalactic Diffuse Source:
Prefactor: 2.12712e-05 +/- 0.00674714
Index: -2.1
Scale: 100
Npred: 4.35898e-05
Flux: 2.43125e-09 +/- 7.70682e-07 photons/cm^2/s
GALPROP Diffuse Source:
Value: 2.12921 +/- 0.42475
Npred: 28.6587
Flux: 0.0010404 +/- 0.00020753 photons/cm^2/s
GRB_080916C:
Integral: 321.485 +/- 79.9698
Index: -2.01851 +/- 0.108139
LowerLimit: 20
UpperLimit: 200000
Npred: 70.4164
ROI distance: 0
TS value: 451.168
Flux: 6.24327e-05 +/- 8.09301e-06 photons/cm^2/s
Total number of observed counts: 99
Total number of model events: 99.0752
-log(Likelihood): 612.1321133
Elapsed CPU time: 22.44168
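As a rough way to read the fit above: a common rule of thumb for LAT point sources is that the detection significance is approximately sqrt(TS), so the TS of ~451 reported for GRB_080916C corresponds to a detection at roughly the 21-sigma level. A one-line check (illustrative only):

```python
import numpy as np
print('approximate significance: %.1f sigma' % np.sqrt(451.168))  # TS value from the gtlike output above
```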
|
python-tuts/1-intermediate/04 - Iteration tools/Project /Project - Description.ipynb | ###Markdown
Project For this project you have 4 files containing information about persons.The files are:* `personal_info.csv` - personal information such as name, gender, etc. (one row per person)* `vehicles.csv` - what vehicle people own (one row per person)* `employment.csv` - where a person is employed (one row per person)* `update_status.csv` - when the person's data was created and last updatedEach file contains a key, `SSN`, which **uniquely** identifies a person.This key is present in **all** four files.You are guaranteed that the same SSN value is present in **every** file, and that it only appears **once per file**.In addition, the files are all sorted by SSN, i.e. the SSN values appear in the same order in each file. Goal 1Your first task is to create iterators for each of the four files that contained cleaned up data, of the correct type (e.g. string, int, date, etc), and represented by a named tuple.For now these four iterators are just separate, independent iterators. Goal 2Create a single iterable that combines all the columns from all the iterators.The iterable should yield named tuples containing all the columns.Make sure that the SSN's across the files match!All the files are guaranteed to be in SSN sort order, and every SSN is unique, and every SSN appears in every file.Make sure the SSN is not repeated 4 times - one time per row is enough! Goal 3Next, you want to identify any stale records, where stale simply means the record has not been updated since 3/1/2017 (e.g. last update date < 3/1/2017). Create an iterator that only contains current records (i.e. not stale) based on the `last_updated` field from the `status_update` file. Goal 4Find the largest group of car makes for each gender.Possibly more than one such group per gender exists (equal sizes). Hints You will not be able to use a simple split approach here, as I explain in the video.Instead you should use the `csv` module and the `reader` function.Here's a simple example of how to use it - you will need to expand on this for your project goals, but this is a good starting point.
###Code
import csv
def read_file(file_name):
with open(file_name) as f:
rows = csv.reader(f, delimiter=',', quotechar='"')
yield from rows
from itertools import islice
rows = read_file('personal_info.csv')
for row in islice(rows, 5):
print(row)
###Output
['ssn', 'first_name', 'last_name', 'gender', 'language']
['100-53-9824', 'Sebastiano', 'Tester', 'Male', 'Icelandic']
['101-71-4702', 'Cayla', 'MacDonagh', 'Female', 'Lao']
['101-84-0356', 'Nomi', 'Lipprose', 'Female', 'Yiddish']
['104-22-0928', 'Justinian', 'Kunzelmann', 'Male', 'Dhivehi']
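One possible way to build on this starting point for Goal 1 is sketched below. It is not the official solution; the field names come from the header row shown above, and treating every `personal_info` field as a plain string is an assumption (the other files will need int/date conversions):

```python
from collections import namedtuple

Personal = namedtuple('Personal', 'ssn first_name last_name gender language')

def personal_info_iter(file_name):
    rows = read_file(file_name)   # reuse the generator defined above
    next(rows)                    # skip the header row
    for row in rows:
        yield Personal(*row)

# peek at the first parsed record
print(next(personal_info_iter('personal_info.csv')))
```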
|
AutoSortFolders.ipynb | ###Markdown
Auto Sort Downloads Folder on macSort through certain file types in the downloads Folder- images (png, jpeg, jpg, etc.)- videos (mp4, etc.)
###Code
# Import dependencies
import os
import shutil
sourcepath = '/Users/jacobmannix/Desktop/folder'
mainfiles = os.listdir(sourcepath)
image_path = sourcepath + "/images"
video_path = sourcepath + "/videos"
audio_path = sourcepath + "/audio"
svg_path = sourcepath + "/images/svg"
# https://www.computerhope.com/issues/ch001789.htm
image_types = ('.jpeg', 'jpg', 'JPG', 'jpeg-2000', 'png', 'HEIC', 'openexr', 'tiff', 'gif', 'raw')
video_types = ('mp4', '.avi', 'mkv', '.h264', '.h265', 'm4v', 'mov', 'mpg', 'mpeg', 'wmv')
audio_types = ('aif', 'cda', 'mid', 'midi', 'mp3', 'mpa', 'ogg', 'wav', 'wma', 'wpl')
svg_types = ('.svg')
for file in mainfiles:
if file.endswith(image_types):
shutil.move(os.path.join(sourcepath, file), os.path.join(image_path, file))
elif file.endswith(video_types):
shutil.move(os.path.join(sourcepath, file), os.path.join(video_path, file))
elif file.endswith(audio_types):
        shutil.move(os.path.join(sourcepath, file), os.path.join(audio_path, file))
sourcepath = '/Users/jacobmannix/Desktop/folder'
mainfiles = os.listdir(sourcepath)
folders = ((image_types, image_path), (video_types, video_path), (audio_types, audio_path))
for types, path in folders:
    for file in mainfiles:
        if file.endswith(types):
            # image_path/video_path/audio_path already include sourcepath, so use them directly
            shutil.move(os.path.join(sourcepath, file), os.path.join(path, file))
other_types = ('image',)
other_path = sourcepath + "/other"
types_path = (other_types, other_path)
# print(types_path)
# Auto Sort Downloads
sourcepath = '/Users/jacobmannix/Desktop/folder'
mainfiles = os.listdir(sourcepath)
folders = (
( # Images
"/images",
('.jpeg', 'jpg', 'JPG', 'jpeg-2000', 'png', 'HEIC', 'openexr', 'tiff', 'gif', 'raw')
),
( # Video
"/videos",
('mp4', '.avi', 'mkv', '.h264', '.h265', 'm4v', 'mov', 'mpg', 'mpeg', 'wmv')
),
( # Audio
"/audio",
('aif', 'cda', 'mid', 'midi', 'mp3', 'mpa', 'ogg', 'wav', 'wma', 'wpl')
),
( # SVG
"/images/svg",
('.svg')
)
)
for path, types in folders:
    dest = sourcepath + path
    if not os.path.isdir(dest):
        # create the destination folder first (works for nested ones like /images/svg)
        os.makedirs(dest)
    for file in mainfiles:
        if file.endswith(types):
            shutil.move(os.path.join(sourcepath, file), os.path.join(dest, file))
path = '/Users/jacobmannix/Desktop/folder/videos'
os.makedirs(path, exist_ok=True)
if os.path.isdir(path):
    print(path)
else:
    print('false')
###Output
false
|
Modulo2/Tarea4_GalindoAriadna.ipynb | ###Markdown
Tarea 4. Measuring return and risk in a portfolio.**Summary.**> In this assignment, you will compute daily expected return and volatility measures for four different portfolios. You will use the price histories you already downloaded in the previous assignment.**Grading criterion.**> You will be graded on the final results you report, based on your analysis.**Before starting.**> Please copy and paste this file to another location. Before starting, name it *Tarea4_ApellidoNombre* (LastNameFirstName), without accents and without spaces; for example, in my case the file would be called *Tarea4_JimenezEsteban*. Solve all the items in that file and upload it in this space. 1. Data download (20 points)Download the daily adjusted closing prices for the S&P 500 index (^GSPC), Microsoft (MSFT), Walgreens (WBA), and Tesla Motors (TSLA) over the period from January 1, 2011 through December 31, 2015.1. Show the DataFrame of daily prices (5 points).2. Plot the prices (5 points).3. Show the DataFrame of daily percentage returns (5 points).4. Plot the returns (5 points).
###Code
# import libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pandas_datareader.data as web
def get_adj_closes(tickers, start_date= '2011-01-01' ,
end_date='2015-12-15'):
closes = web.DataReader(name=tickers,
data_source='yahoo',
start=start_date,
end=end_date)
closes = closes['Adj Close']
closes.sort_index(inplace=True)
return closes
# download the data
port=web.DataReader(name=['^GSPC','MSFT','WBA','TSLA'],
data_source='yahoo',
start='2011-01-01')
names= ['^GSPC','MSFT','WBA','TSLA']
start='2011-01-01'
end= '2015-12-15'
#1
closes= get_adj_closes(tickers=names,
start_date=start,
end_date= end)
closes.head()
#2
# plot the prices
closes.plot()
#3
# compute the returns
r_port=closes.pct_change().dropna()
r_port.head()
#4
# plot the returns
r_port.plot(grid=True)
###Output
_____no_output_____
###Markdown
2. Expected return and volatility for each asset (30 points)Using the daily return data for MSFT, WBA, and TSLA:1. Report in a DataFrame the daily expected return and daily volatility for each asset. Report in another DataFrame the annual expected return and annual volatility for each asset (10 points).2. Compute the variance-covariance matrix (daily basis) for the assets MSFT, WBA, and TSLA (10 points).3. Compute the correlation matrix (daily basis) for the assets MSFT, WBA, and TSLA (10 points).
###Code
#1
# daily expected return and volatility
tabla= pd.DataFrame(data={'Mean':r_port.mean(),
'Volatility':r_port.std()},
index=r_port.columns)
tabla
#1
# convert the daily figures to annual
tabla2= pd.DataFrame(data={'Mean':r_port.mean()*252
,'Volatility':np.sqrt(252)*r_port.std()},
index=r_port.columns)
tabla2
#2
# variance-covariance matrix
r_port.cov()
#3
# correlation matrix
r_port.corr()
###Output
_____no_output_____
###Markdown
3. Expected return and volatility for portfolios (30 points)1. Compute the daily returns of the following portfolios. Report in a DataFrame the annual expected return and annual volatility for each portfolio, computing these by treating each portfolio as if it were an individual asset (15 points). - Portfolio 1: equally weighted between MSFT, WBA, and TSLA. - Portfolio 2: 30% MSFT, 20% WBA, and 50% TSLA. - Portfolio 3: 50% MSFT, 30% WBA, and 20% TSLA. - Portfolio 4: 20% MSFT, 50% WBA, and 30% TSLA.2. For each of the portfolios above, report in another DataFrame the annual expected return and annual volatility for each portfolio, computing these using the portfolio expected return and volatility formulas derived in class (10 points).3. Compare the results of point one with those of point two (5 points).
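For reference, the class formulas referred to in point 2 are E[r_p] = w'mu and sigma_p = sqrt(w' Sigma w), with w the weight vector, mu the vector of daily expected returns, and Sigma the daily covariance matrix. A minimal sketch of that computation is added below for illustration (it is not part of the original submission); the submitted solution follows in the next cells.

```python
import numpy as np

# illustration with the Portfolio 1 weights over MSFT, WBA, TSLA
w = np.array([1/3, 1/3, 1/3])
mu = r_port[['MSFT', 'WBA', 'TSLA']].mean()      # daily expected returns
Sigma = r_port[['MSFT', 'WBA', 'TSLA']].cov()    # daily covariance matrix

Er_annual = 252 * (w @ mu)
vol_annual = np.sqrt(252) * np.sqrt(w @ Sigma.values @ w)
print(Er_annual, vol_annual)
```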
###Code
#1
# add the new portfolios
r_port['Port1']= 1/3*r_port['MSFT']+1/3*r_port['WBA']+ 1/3*r_port['TSLA']
r_port['Port2']= 0.3*r_port['MSFT']+0.2*r_port['WBA']+ 0.5*r_port['TSLA']
r_port['Port3']= 0.5*r_port['MSFT']+0.3*r_port['WBA']+ 0.2*r_port['TSLA']
r_port['Port4']= 0.2*r_port['MSFT']+0.5*r_port['WBA']+ 0.3*r_port['TSLA']
r_port.head()
#1
# compute the expected returns
Er1 = r_port['Port1'].mean()
Er2 = r_port['Port2'].mean()
Er3 = r_port['Port3'].mean()
Er4 = r_port['Port4'].mean()
Er1, Er2, Er3, Er4
# compute the volatility
s1 = r_port['Port1'].std()
s2 = r_port['Port2'].std()
s3 = r_port['Port3'].std()
s4 = r_port['Port4'].std()
s1,s2,s3,s4
#1
# annual figures in a DataFrame
tabla3 = pd.DataFrame(data={'Mean':[Er1,Er2,Er3,Er4]
,'Volatility':[s1,s2,s3,s4]}
,index=['Port1','Port2','Port3','Port4'])
tabla3.Mean = tabla3.Mean*252
tabla3.Volatility = tabla3.Volatility*252**(1/2)
tabla3
# asset weights in each portfolio
tabla4 = pd.DataFrame([[0,1/3,1/3,1/3]
,[0,0.3,0.2,0.5]
,[0,0.5,0.3,0.2]
,[0,0.2,0.5,0.3]],
columns=['^GSPC','MSFT','WBA','TSLA']
,index=['Port12','Port22','Port32','Port42'])
tabla4
rE1=(tabla['Mean']*tabla4.iloc[0]).sum()
rE2=(tabla['Mean']*tabla4.iloc[1]).sum()
rE3=(tabla['Mean']*tabla4.iloc[2]).sum()
rE4=(tabla['Mean']*tabla4.iloc[3]).sum()
vol1 =((tabla['Mean']*(tabla4.iloc[0]-rE1)**2).sum())**0.5
vol2 =((tabla['Mean']*(tabla4.iloc[1]-rE2)**2).sum())**0.5
vol3 =((tabla['Mean']*(tabla4.iloc[2]-rE3)**2).sum())**0.5
vol4 =((tabla['Mean']*(tabla4.iloc[3]-rE4)**2).sum())**0.5
tabla5 = pd.DataFrame(data={'Mean': 252*np.array([rE1, rE2, rE3, rE4]),
                            'Volatility': np.sqrt(252)*np.array([vol1, vol2, vol3, vol4])},
                      index=['Port12','Port22','Port32','Port42'])
tabla5
###Output
_____no_output_____
###Markdown
**Remarks**With both methods it is possible to carry out the required calculations, as well as to convert them from daily to annual figures, but I think the first one is a bit simpler. 4. Plot of expected returns vs. volatility (20 points)Create a scatter plot showing the expected return and the volatility for each of the assets, the S&P500 index, and the four portfolios in the expected return (y-axis) vs. volatility (x-axis) space. Label each of the points and the axes appropriately.
###Code
# show the table
tabla
X = pd.concat([tabla3['Volatility'],tabla2['Volatility']])
Y = pd.concat([tabla3['Mean'],tabla2['Mean']])
plt.scatter(X,Y)
plt.xlabel('Volatility')
plt.ylabel('Expected return')
plt.text(X[0],Y[0], 'Port1')
plt.text(X[1],Y[1], 'Port2')
plt.text(X[2],Y[2], 'Port3')
plt.text(X[3],Y[3], 'Port4')
plt.text(X[4],Y[4], 'MSFT')
plt.text(X[5],Y[5], 'TSLA')
plt.text(X[6],Y[6], 'WBA')
plt.text(X[7],Y[7], 'GSPC')
plt.grid()
plt.show()
###Output
_____no_output_____ |
figure_DA.ipynb | ###Markdown
Illustration figure on best-possible analysis errors
###Code
exp_id = '92'
exp_ids_deepNet = [exp_id]
win_lens_deepNet, rmses_analysis_deepNet = get_analysis_rmses_4DVar_exp(exp_ids=exp_ids_deepNet)
plt.plot(np.array(rmses_analysis_deepNet).squeeze())
plt.show()
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
exp_names = os.listdir('experiments_DA/')
conf_exp = exp_names[np.where(np.array([name.split('_')[0] for name in exp_names])==str(exp_id))[0][0]][:-4]
args = setup_4DVar(conf_exp=f'experiments_DA/{conf_exp}.yml')
args.pop('conf_exp')
K,J = args['K'], args['J']
T_win = args['T_win']
model_pars = {
'exp_id' : args['model_exp_id'],
'model_forwarder' : 'rk4_default',
'K_net' : args['K'],
'J_net' : args['J'],
'dt_net' : args['dt']
}
model, model_forwarder, _ = get_model(model_pars, res_dir=res_dir, exp_dir='')
obs_pars = {'obs_operator' : ObsOp_rotsampleGaussian,
'obs_operator_args' : {'frq' : args['obs_operator_frq'],
'sigma2' : args['obs_operator_sig2']}}
model_observer = obs_pars['obs_operator'](**obs_pars['obs_operator_args'])
prior = torch.distributions.normal.Normal(loc=torch.zeros((1,J+1,K)),
scale=1.*torch.ones((1,J+1,K)))
gen = GenModel(model_forwarder, model_observer, prior, T=T_win, x_init=None)
save_dir = 'results/data_assimilation/' + args['exp_id'] + '/'
fn = save_dir + 'out.npy'
out = np.load(res_dir + fn, allow_pickle=True)[()]
def get_pred_rmses_4DVar_exp(exp_id, forecast_len=120):
exp_names = os.listdir('experiments_DA/')
conf_exp = exp_names[np.where(np.array([name.split('_')[0] for name in exp_names])==str(exp_id))[0][0]][:-4]
args = setup_4DVar(conf_exp=f'experiments_DA/{conf_exp}.yml')
args.pop('conf_exp')
#assert args['T_win'] == 64 # we want 4d integration window here
K,J = args['K'], args['J']
T_win = args['T_win']
model_pars = {
'exp_id' : args['model_exp_id'],
'model_forwarder' : 'rk4_default',
'K_net' : args['K'],
'J_net' : args['J'],
'dt_net' : args['dt']
}
model, model_forwarder, _ = get_model(model_pars, res_dir=res_dir, exp_dir='')
obs_operator = args['obs_operator']
obs_pars = {}
if obs_operator=='ObsOp_subsampleGaussian':
obs_pars['obs_operator'] = ObsOp_subsampleGaussian
obs_pars['obs_operator_args'] = {'r' : args['obs_operator_r'], 'sigma2' : args['obs_operator_sig2']}
elif obs_operator=='ObsOp_identity':
obs_pars['obs_operator'] = ObsOp_identity
obs_pars['obs_operator_args'] = {}
elif obs_operator=='ObsOp_rotsampleGaussian':
obs_pars['obs_operator'] = ObsOp_rotsampleGaussian
obs_pars['obs_operator_args'] = {'frq' : args['obs_operator_frq'],
'sigma2' : args['obs_operator_sig2']}
else:
raise NotImplementedError()
model_observer = obs_pars['obs_operator'](**obs_pars['obs_operator_args'])
prior = torch.distributions.normal.Normal(loc=torch.zeros((1,J+1,K)),
scale=1.*torch.ones((1,J+1,K)))
# ### define generative model for observed data
gen = GenModel(model_forwarder, model_observer, prior, T=T_win, x_init=None)
forecast_win = int(forecast_len/1.5) # 5d forecast
eval_every = int(1.5/1.5) # every 6h
save_dir = 'results/data_assimilation/' + args['exp_id'] + '/'
fn = save_dir + 'out.npy'
out = np.load(res_dir + fn, allow_pickle=True)[()]
J = args['J']
n_steps = args['n_steps']
T_win = args['T_win']
T_shift = args['T_shift'] if args['T_shift'] >= 0 else T_win
dt = args['dt']
data = out['out']
y, m = out['y'], out['m']
x_sols = out['x_sols']
print('percent of NaN sols', str(np.mean(np.isnan(x_sols))))
losses, times = out['losses'], out['times']
assert T_win == out['T_win']
mses = np.zeros(((data.shape[0] - forecast_win - T_win) // T_shift + 1, forecast_win//eval_every+1, y.shape[1]))
for i in range(len(mses)):
forecasts = gen._forward(x=as_tensor(x_sols[i]), T_obs=np.arange(0,forecast_win+1,eval_every))
n = i * T_shift
for j in range(mses.shape[1]): # loop over integration windows
forecast = forecasts[j].detach().cpu().numpy()
if np.any(np.isnan(forecast)):
print('warning - had NaN in forecasts!')
y_obs = data[n+j*eval_every]
mses[i,j] = np.nanmean((forecast - y_obs)**2, axis=(-2, -1))
pred_lens = 1.5/24 * np.arange(0, forecast_win+1, eval_every)
return pred_lens, np.sqrt(mses)
pred_lens_deepNet, rmses_pred_deepNet = get_pred_rmses_4DVar_exp(exp_id=exp_id, forecast_len=int(T_win*1.5))
np.mean(rmses_pred_deepNet[:,:,:].mean(axis=(0,2)))
i = 0
i_plot = 1
plt.figure(figsize=(12, 8))
plt.subplot(2,3,1)
plt.plot(rmses_pred_deepNet[:,:,:].mean(axis=(0,2)))
plt.ylabel('analysis RMSE')
plt.xlabel('position within integration window')
for offset in [0, np.argmin(rmses_pred_deepNet[:,:,:].mean(axis=(0,2)))]:
x_true = sortL96fromChannels(out['out'])[offset:out['x_sols'].shape[0]+offset,i,:].T
x_sols = sortL96fromChannels(out['x_sols'])[:,i,:].T
x_pred = gen._forward(sortL96intoChannels(as_tensor(x_sols.T),J=0) , T_obs=[offset])[0].detach().cpu().numpy()
x_pred = sortL96fromChannels(x_pred).T
plt.subplot(3,3,i_plot+1)
plt.imshow(x_true, aspect='auto')
plt.colorbar()
if i_plot == 1:
plt.ylabel('true state')
plt.yticks([])
plt.title(f'offset={offset}')
plt.subplot(3,3,i_plot+4)
plt.imshow(x_pred, aspect='auto')
plt.colorbar()
if i_plot == 1:
plt.ylabel('4D-Var analysis')
plt.yticks([])
plt.subplot(3,3,i_plot+7)
plt.imshow(x_true - x_pred, cmap='bwr', aspect='auto')
plt.colorbar()
if i_plot == 1:
plt.ylabel('difference')
plt.yticks([])
i_plot += 1
plt.show()
###Output
_____no_output_____ |
notebooks/cv/01_image_basics.ipynb | ###Markdown
###Code
from matplotlib import pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Black Image
###Code
black = np.zeros([10,10])
black
plt.imshow(np.zeros([10,10]), cmap="gray", vmin=0, vmax=255)
###Output
_____no_output_____
###Markdown
White Image
###Code
white = np.full((10,10), 255)
white
white.shape
plt.imshow(white, cmap="gray", vmin=0, vmax=255)
###Output
_____no_output_____
###Markdown
Gray Image
###Code
gray = np.full((10,10), 170)
gray
plt.imshow(gray, cmap="gray", vmin=0, vmax=255)
###Output
_____no_output_____
###Markdown
Addressing Pixels
###Code
gray[0,0] = 0
gray
plt.imshow(gray, cmap="gray", vmin=0, vmax=255)
###Output
_____no_output_____
###Markdown
Addressing Ranges
###Code
gray
gray[0:8,0:2] = 0
gray
plt.imshow(gray, cmap="gray", vmin=0, vmax=255)
###Output
_____no_output_____
###Markdown
Colors
###Code
rgb = np.zeros((10,10,3))
plt.imshow(rgb, vmin=0, vmax=255)
rgb[:,:,2] = 255
plt.imshow(rgb, vmin=0, vmax=255)
rgb[0,0,0] = 170
plt.imshow(rgb, vmin=0, vmax=255)
###Output
_____no_output_____ |
Lesson-05_Logistic_Classification.ipynb | ###Markdown
Lab 5: Logistic Classification Author: Seungjae Lee (이승재) We use elemental PyTorch to implement logistic regression here. However, in most actual applications, abstractions such as nn.Module or nn.Linear are used. You can see those implementations near the end of this notebook. Reminder: Logistic Regression Hypothesis $$ H(X) = \frac{1}{1+e^{-W^T X}} $$ Cost $$ cost(W) = -\frac{1}{m} \sum y \log\left(H(x)\right) + (1-y) \log\left(1-H(x)\right) $$ - If $y \simeq H(x)$, cost is near 0. - If $y \neq H(x)$, cost is high. Weight Update via Gradient Descent $$ W := W - \alpha \frac{\partial}{\partial W} cost(W) $$ - $\alpha$: Learning rate Imports
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# For reproducibility
torch.manual_seed(1)
###Output
_____no_output_____
###Markdown
Training Data
###Code
x_data = [[1, 2], [2, 3], [3, 1], [4, 3], [5, 3], [6, 2]]
y_data = [[0], [0], [0], [1], [1], [1]]
###Output
_____no_output_____
###Markdown
Consider the following classification problem: given the number of hours each student spent watching the lecture and working in the code lab, predict whether the student passed or failed a course. For example, the first (index 0) student watched the lecture for 1 hour and spent 2 hours in the lab session ([1, 2]), and ended up failing the course ([0]).
###Code
x_train = torch.FloatTensor(x_data)
y_train = torch.FloatTensor(y_data)
###Output
_____no_output_____
###Markdown
As always, we need these data to be in `torch.Tensor` format, so we convert them.
###Code
print(x_train.shape)
print(y_train.shape)
###Output
torch.Size([6, 2])
torch.Size([6, 1])
###Markdown
Computing the Hypothesis $$ H(X) = \frac{1}{1+e^{-W^T X}} $$ PyTorch has a `torch.exp()` function that resembles the exponential function.
###Code
print('e^1 equals: ', torch.exp(torch.FloatTensor([1])))
###Output
e^1 equals: tensor([2.7183])
###Markdown
We can use it to compute the hypothesis function conveniently.
###Code
W = torch.zeros((2, 1), requires_grad=True)
b = torch.zeros(1, requires_grad=True)
hypothesis = 1 / (1 + torch.exp(-(x_train.matmul(W) + b)))
print(hypothesis)
print(hypothesis.shape)
###Output
tensor([[0.5000],
[0.5000],
[0.5000],
[0.5000],
[0.5000],
[0.5000]], grad_fn=<MulBackward>)
torch.Size([6, 1])
###Markdown
Or, we could use `torch.sigmoid()` function! This resembles the sigmoid function:
###Code
print('1/(1+e^{-1}) equals: ', torch.sigmoid(torch.FloatTensor([1])))
###Output
1/(1+e^{-1}) equals: tensor([0.7311])
###Markdown
Now, the code for hypothesis function is cleaner.
###Code
hypothesis = torch.sigmoid(x_train.matmul(W) + b)
print(hypothesis)
print(hypothesis.shape)
###Output
tensor([[0.5000],
[0.5000],
[0.5000],
[0.5000],
[0.5000],
[0.5000]], grad_fn=<SigmoidBackward>)
torch.Size([6, 1])
###Markdown
Computing the Cost Function (Low-level) $$ cost(W) = -\frac{1}{m} \sum y \log\left(H(x)\right) + (1-y) \log\left(1-H(x)\right) $$ We want to measure the difference between `hypothesis` and `y_train`.
###Code
print(hypothesis)
print(y_train)
###Output
tensor([[0.5000],
[0.5000],
[0.5000],
[0.5000],
[0.5000],
[0.5000]], grad_fn=<SigmoidBackward>)
tensor([[0.],
[0.],
[0.],
[1.],
[1.],
[1.]])
###Markdown
For one element, the loss can be computed as follows:
###Code
-(y_train[0] * torch.log(hypothesis[0]) +
(1 - y_train[0]) * torch.log(1 - hypothesis[0]))
###Output
_____no_output_____
###Markdown
To compute the losses for the entire batch, we can simply input the entire vector.
###Code
losses = -(y_train * torch.log(hypothesis) +
(1 - y_train) * torch.log(1 - hypothesis))
print(losses)
###Output
tensor([[0.6931],
[0.6931],
[0.6931],
[0.6931],
[0.6931],
[0.6931]], grad_fn=<NegBackward>)
###Markdown
Then, we just `.mean()` to take the mean of these individual losses.
###Code
cost = losses.mean()
print(cost)
###Output
tensor(0.6931, grad_fn=<MeanBackward1>)
###Markdown
Computing the Cost Function with `F.binary_cross_entropy` In reality, binary classification is used so often that PyTorch has a simple function called `F.binary_cross_entropy` implemented to lighten the burden.
###Code
F.binary_cross_entropy(hypothesis, y_train)
###Output
_____no_output_____
###Markdown
Training with Low-level Binary Cross Entropy Loss
###Code
x_data = [[1, 2], [2, 3], [3, 1], [4, 3], [5, 3], [6, 2]]
y_data = [[0], [0], [0], [1], [1], [1]]
x_train = torch.FloatTensor(x_data)
y_train = torch.FloatTensor(y_data)
# initialize the model parameters
W = torch.zeros((2, 1), requires_grad=True)
b = torch.zeros(1, requires_grad=True)
# set up the optimizer
optimizer = optim.SGD([W, b], lr=1)
nb_epochs = 1000
for epoch in range(nb_epochs + 1):
    # compute the cost
hypothesis = torch.sigmoid(x_train.matmul(W) + b) # or .mm or @
cost = -(y_train * torch.log(hypothesis) +
(1 - y_train) * torch.log(1 - hypothesis)).mean()
    # improve H(x) using the cost (gradient step)
optimizer.zero_grad()
cost.backward()
optimizer.step()
    # print a log every 100 epochs
if epoch % 100 == 0:
print('Epoch {:4d}/{} Cost: {:.6f}'.format(
epoch, nb_epochs, cost.item()
))
###Output
Epoch 0/1000 Cost: 0.693147
Epoch 100/1000 Cost: 0.134722
Epoch 200/1000 Cost: 0.080643
Epoch 300/1000 Cost: 0.057900
Epoch 400/1000 Cost: 0.045300
Epoch 500/1000 Cost: 0.037261
Epoch 600/1000 Cost: 0.031673
Epoch 700/1000 Cost: 0.027556
Epoch 800/1000 Cost: 0.024394
Epoch 900/1000 Cost: 0.021888
Epoch 1000/1000 Cost: 0.019852
###Markdown
Training with `F.binary_cross_entropy`
###Code
# initialize the model parameters
W = torch.zeros((2, 1), requires_grad=True)
b = torch.zeros(1, requires_grad=True)
# set up the optimizer
optimizer = optim.SGD([W, b], lr=1)
nb_epochs = 1000
for epoch in range(nb_epochs + 1):
    # compute the cost
hypothesis = torch.sigmoid(x_train.matmul(W) + b) # or .mm or @
cost = F.binary_cross_entropy(hypothesis, y_train)
    # improve H(x) using the cost (gradient step)
optimizer.zero_grad()
cost.backward()
optimizer.step()
    # print a log every 100 epochs
if epoch % 100 == 0:
print('Epoch {:4d}/{} Cost: {:.6f}'.format(
epoch, nb_epochs, cost.item()
))
###Output
Epoch 0/1000 Cost: 0.693147
Epoch 100/1000 Cost: 0.134722
Epoch 200/1000 Cost: 0.080643
Epoch 300/1000 Cost: 0.057900
Epoch 400/1000 Cost: 0.045300
Epoch 500/1000 Cost: 0.037261
Epoch 600/1000 Cost: 0.031672
Epoch 700/1000 Cost: 0.027556
Epoch 800/1000 Cost: 0.024394
Epoch 900/1000 Cost: 0.021888
Epoch 1000/1000 Cost: 0.019852
###Markdown
Loading Real Data
###Code
import numpy as np
xy = np.loadtxt('data-03-diabetes.csv', delimiter=',', dtype=np.float32)
x_data = xy[:, 0:-1]
y_data = xy[:, [-1]]
x_train = torch.FloatTensor(x_data)
y_train = torch.FloatTensor(y_data)
print(x_train[0:5])
print(y_train[0:5])
###Output
tensor([[-0.2941, 0.4874, 0.1803, -0.2929, 0.0000, 0.0015, -0.5312, -0.0333],
[-0.8824, -0.1457, 0.0820, -0.4141, 0.0000, -0.2072, -0.7669, -0.6667],
[-0.0588, 0.8392, 0.0492, 0.0000, 0.0000, -0.3055, -0.4927, -0.6333],
[-0.8824, -0.1055, 0.0820, -0.5354, -0.7778, -0.1624, -0.9240, 0.0000],
[ 0.0000, 0.3769, -0.3443, -0.2929, -0.6028, 0.2846, 0.8873, -0.6000]])
tensor([[0.],
[1.],
[0.],
[1.],
[0.]])
###Markdown
Training with Real Data using low-level Binary Cross Entropy Loss
###Code
# initialize the model parameters
W = torch.zeros((8, 1), requires_grad=True)
b = torch.zeros(1, requires_grad=True)
# set up the optimizer
optimizer = optim.SGD([W, b], lr=1)
nb_epochs = 100
for epoch in range(nb_epochs + 1):
    # compute the cost
hypothesis = torch.sigmoid(x_train.matmul(W) + b) # or .mm or @
cost = -(y_train * torch.log(hypothesis) + (1 - y_train) * torch.log(1 - hypothesis)).mean()
    # improve H(x) using the cost (gradient step)
optimizer.zero_grad()
cost.backward()
optimizer.step()
    # print a log every 10 epochs
if epoch % 10 == 0:
print('Epoch {:4d}/{} Cost: {:.6f}'.format(
epoch, nb_epochs, cost.item()
))
###Output
Epoch 0/100 Cost: 0.693148
Epoch 10/100 Cost: 0.572727
Epoch 20/100 Cost: 0.539493
Epoch 30/100 Cost: 0.519708
Epoch 40/100 Cost: 0.507066
Epoch 50/100 Cost: 0.498539
Epoch 60/100 Cost: 0.492549
Epoch 70/100 Cost: 0.488209
Epoch 80/100 Cost: 0.484985
Epoch 90/100 Cost: 0.482543
Epoch 100/100 Cost: 0.480661
###Markdown
Training with Real Data using `F.binary_cross_entropy`
###Code
# initialize the model parameters
W = torch.zeros((8, 1), requires_grad=True)
b = torch.zeros(1, requires_grad=True)
# set up the optimizer
optimizer = optim.SGD([W, b], lr=1)
nb_epochs = 100
for epoch in range(nb_epochs + 1):
    # compute the cost
hypothesis = torch.sigmoid(x_train.matmul(W) + b) # or .mm or @
cost = F.binary_cross_entropy(hypothesis, y_train)
    # improve H(x) using the cost (gradient step)
optimizer.zero_grad()
cost.backward()
optimizer.step()
    # print a log every 10 epochs
if epoch % 10 == 0:
print('Epoch {:4d}/{} Cost: {:.6f}'.format(
epoch, nb_epochs, cost.item()
))
###Output
Epoch 0/100 Cost: 0.693147
Epoch 10/100 Cost: 0.572727
Epoch 20/100 Cost: 0.539494
Epoch 30/100 Cost: 0.519708
Epoch 40/100 Cost: 0.507065
Epoch 50/100 Cost: 0.498539
Epoch 60/100 Cost: 0.492549
Epoch 70/100 Cost: 0.488208
Epoch 80/100 Cost: 0.484985
Epoch 90/100 Cost: 0.482543
Epoch 100/100 Cost: 0.480661
###Markdown
Checking the Accuracy of our Model After we finish training the model, we want to check how well our model fits the training set.
###Code
hypothesis = torch.sigmoid(x_train.matmul(W) + b)
print(hypothesis[:5])
###Output
tensor([[0.4103],
[0.9242],
[0.2300],
[0.9411],
[0.1772]], grad_fn=<SliceBackward>)
###Markdown
We can change **hypothesis** (real number from 0 to 1) to **binary predictions** (either 0 or 1) by comparing them to 0.5.
###Code
prediction = hypothesis >= torch.FloatTensor([0.5])
print(prediction[:5])
###Output
tensor([[0],
[1],
[0],
[1],
[0]], dtype=torch.uint8)
###Markdown
Then, we compare it with the correct labels `y_train`.
###Code
print(prediction[:5])
print(y_train[:5])
correct_prediction = prediction.float() == y_train
print(correct_prediction[:5])
###Output
tensor([[1],
[1],
[1],
[1],
[1]], dtype=torch.uint8)
###Markdown
Finally, we can calculate the accuracy by counting the number of correct predictions and dividing by the total number of predictions.
###Code
accuracy = correct_prediction.sum().item() / len(correct_prediction)
print('The model has an accuracy of {:2.2f}% for the training set.'.format(accuracy * 100))
###Output
The model has an accuracy of 76.68% for the training set.
###Markdown
Optional: High-level Implementation with `nn.Module`
###Code
class BinaryClassifier(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(8, 1)
self.sigmoid = nn.Sigmoid()
def forward(self, x):
return self.sigmoid(self.linear(x))
model = BinaryClassifier()
# set up the optimizer
optimizer = optim.SGD(model.parameters(), lr=1)
nb_epochs = 100
for epoch in range(nb_epochs + 1):
    # compute H(x)
hypothesis = model(x_train)
    # compute the cost
cost = F.binary_cross_entropy(hypothesis, y_train)
    # improve H(x) using the cost (gradient step)
optimizer.zero_grad()
cost.backward()
optimizer.step()
    # print a log every 10 epochs
if epoch % 10 == 0:
prediction = hypothesis >= torch.FloatTensor([0.5])
correct_prediction = prediction.float() == y_train
accuracy = correct_prediction.sum().item() / len(correct_prediction)
print('Epoch {:4d}/{} Cost: {:.6f} Accuracy {:2.2f}%'.format(
epoch, nb_epochs, cost.item(), accuracy * 100,
))
###Output
Epoch 0/100 Cost: 0.704829 Accuracy 45.72%
Epoch 10/100 Cost: 0.572391 Accuracy 67.59%
Epoch 20/100 Cost: 0.539563 Accuracy 73.25%
Epoch 30/100 Cost: 0.520042 Accuracy 75.89%
Epoch 40/100 Cost: 0.507561 Accuracy 76.15%
Epoch 50/100 Cost: 0.499125 Accuracy 76.42%
Epoch 60/100 Cost: 0.493177 Accuracy 77.21%
Epoch 70/100 Cost: 0.488846 Accuracy 76.81%
Epoch 80/100 Cost: 0.485612 Accuracy 76.28%
Epoch 90/100 Cost: 0.483146 Accuracy 76.55%
Epoch 100/100 Cost: 0.481234 Accuracy 76.81%
|
NeoBlog.ipynb | ###Markdown
Grab data Commentary:The popular [Abalone](https://archive.ics.uci.edu/ml/datasets/Abalone) data set originally from the UCI data repository \[1\] will be used.> \[1\] Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
###Code
from pathlib import Path
import boto3
for p in ['raw_data', 'training_data', 'validation_data']:
Path(p).mkdir(exist_ok=True)
s3 = boto3.client('s3')
s3.download_file('sagemaker-sample-files', 'datasets/tabular/uci_abalone/abalone.libsvm', 'raw_data/abalone')
###Output
_____no_output_____
###Markdown
Prepare training and validation data
###Code
from sklearn.datasets import load_svmlight_file, dump_svmlight_file
from sklearn.model_selection import train_test_split
X, y = load_svmlight_file('raw_data/abalone')
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=1984, shuffle=True)
dump_svmlight_file(x_train, y_train, 'training_data/abalone.train')
dump_svmlight_file(x_test, y_test, 'validation_data/abalone.test')
###Output
_____no_output_____
###Markdown
Train model Commentary:Notice that the [SageMaker XGBoost container](https://github.com/aws/sagemaker-xgboost-container) framework version is set to be `1.2-1`. This is extremely important – the older `0.90-2` version will NOT work with SageMaker Neo out of the box. This is because in February of 2021, the SageMaker Neo team updated their XGBoost library version to `1.2` and backwards compatibility was not kept.Moreover, notice that we are using the open source XGBoost algorithm version, so we must provide our own training script and model loading function. These two required components are defined in `entrypoint.py`, which is part of the `neo-blog` repository. The training script is very basic, and the inspiration was taken from another sample notebook [here](https://github.com/aws/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/xgboost_abalone/xgboost_abalone_dist_script_mode.ipynb). Please note also that for `instance_count` and `instance_type`, the values are `1` and `local`, respectively, which means that the training job will run locally on our notebook instance. This is beneficial because it eliminates the startup time of training instances when a job runs remotely instead.Finally, notice that the number of boosting rounds has been set to 10,000. This means that the model will consist of 10,000 individual trees and will be computationally expensive to run, which we want for load testing purposes. A side effect will be that the model will severely overfit on the training data, but that is okay since accuracy is not a priority here. A computationally expensive model could have also been achieved by increasing the `max_depth` parameter as well.
###Code
import sagemaker
from sagemaker.xgboost.estimator import XGBoost
from sagemaker.session import Session
from sagemaker.inputs import TrainingInput
bucket = Session().default_bucket()
role = sagemaker.get_execution_role()
# initialize hyperparameters
hyperparameters = {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"verbosity":"1",
"objective":"reg:squarederror",
"num_round":"10000"
}
# construct a SageMaker XGBoost estimator
# specify the entry_point to your xgboost training script
estimator = XGBoost(entry_point = "entrypoint.py",
framework_version='1.2-1', # 1.x MUST be used
hyperparameters=hyperparameters,
role=role,
instance_count=1,
instance_type='local',
output_path=f's3://{bucket}/neo-demo') # gets saved in bucket/neo-demo/job_name/model.tar.gz
# define the data type and paths to the training and validation datasets
content_type = "libsvm"
train_input = TrainingInput('file://training_data', content_type=content_type)
validation_input = TrainingInput('file://validation_data', content_type=content_type)
# execute the XGBoost training job
estimator.fit({'train': train_input, 'validation': validation_input}, logs=['Training'])
###Output
_____no_output_____
###Markdown
Deploy unoptimized model Commentary:There are two interesting things to note here. The first of which is that although the training job was local, the model artifact was still set up to be stored in [Amazon S3](https://aws.amazon.com/s3/) upon job completion. The other peculiarity here is that we must create an `XGBoostModel` object and use its `deploy` method, rather than calling the `deploy` method of the estimator itself. This is due to the fact that we ran the training job in local mode, so the estimator is not aware of any “official” training job that is viewable in the SageMaker console and associable with the model artifact. Because of this, the estimator will error out if its own `deploy` method is used, and the `XGBoostModel` object must be constructed first instead. Notice also that we will be hosting the model on a c5 (compute-optimized) instance type. This instance will be particularly well suited for hosting the XGBoost model, since XGBoost by default runs on CPU and it’s a CPU-bound algorithm for inference (on the other hand, during training XGBoost is a memory bound algorithm). The c5.large instance type is also marginally cheaper to run in the us-east-1 region at $0.119 per hour compared to a t2.large at $0.1299 per hour.
###Code
from sagemaker.xgboost.model import XGBoostModel
# grab the model artifact that was written out by the local training job
s3_model_artifact = estimator.latest_training_job.describe()['ModelArtifacts']['S3ModelArtifacts']
# we have to switch from local mode to remote mode
xgboost_model = XGBoostModel(
model_data=s3_model_artifact,
role=role,
entry_point="entrypoint.py",
framework_version='1.2-1',
)
unoptimized_endpoint_name = 'unoptimized-c5'
xgboost_model.deploy(
initial_instance_count = 1,
instance_type='ml.c5.large',
endpoint_name=unoptimized_endpoint_name
)
###Output
_____no_output_____
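###Markdown
Once the endpoint above is in service, it can also be smoke-tested through the higher-level SageMaker Python SDK instead of the raw boto3 call used later in this notebook. The sketch below is only illustrative: it assumes the SDK v2 `Predictor`/`CSVSerializer` interfaces and reuses the endpoint name and the sample payload row that appear elsewhere in this notebook.

```python
# Optional sanity check via the SageMaker Python SDK (illustrative sketch).
from sagemaker.predictor import Predictor
from sagemaker.serializers import CSVSerializer

predictor = Predictor(
    endpoint_name=unoptimized_endpoint_name,  # 'unoptimized-c5', defined above
    serializer=CSVSerializer(),               # send comma-separated feature rows
)

# one CSV row of features, matching the payload used in the boto3 validation cells below
print(predictor.predict('2,0.675,0.55,0.175,1.689,0.694,0.371,0.474'))
```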
###Markdown
Optimize model with SageMaker Neo
###Code
job_name = s3_model_artifact.split("/")[-2]
neo_model = xgboost_model.compile(
target_instance_family="ml_c5",
role=role,
input_shape =f'{{"data": [1, {X.shape[1]}]}}',
output_path =f's3://{bucket}/neo-demo/{job_name}', # gets saved in bucket/neo-demo/model-ml_c5.tar.gz
framework = "xgboost",
job_name=job_name # what it shows up as in console
)
###Output
_____no_output_____
###Markdown
Deploy Neo model
###Code
optimized_endpoint_name = 'neo-optimized-c5'
neo_model.deploy(
initial_instance_count = 1,
instance_type='ml.c5.large',
endpoint_name=optimized_endpoint_name
)
###Output
_____no_output_____
###Markdown
Validate that endpoints are working
###Code
import boto3
smr = boto3.client('sagemaker-runtime')
resp = smr.invoke_endpoint(EndpointName='neo-optimized-c5', Body=b'2,0.675,0.55,0.175,1.689,0.694,0.371,0.474', ContentType='text/csv')
print('neo-optimized model response: ', resp['Body'].read())
resp = smr.invoke_endpoint(EndpointName='unoptimized-c5', Body=b'2,0.675,0.55,0.175,1.689,0.694,0.371,0.474', ContentType='text/csv')
print('unoptimized model response: ', resp['Body'].read())
###Output
_____no_output_____
###Markdown
Create CloudWatch dashboard for monitoring performance
###Code
import json
cw = boto3.client('cloudwatch')
dashboard_name = 'NeoDemo'
region = Session().boto_region_name # get region we're currently in
body = {
"widgets": [
{
"type": "metric",
"x": 0,
"y": 0,
"width": 24,
"height": 12,
"properties": {
"metrics": [
[ "AWS/SageMaker", "Invocations", "EndpointName", optimized_endpoint_name, "VariantName", "AllTraffic", { "stat": "Sum", "yAxis": "left" } ],
[ "...", unoptimized_endpoint_name, ".", ".", { "stat": "Sum", "yAxis": "left" } ],
[ ".", "ModelLatency", ".", ".", ".", "." ],
[ "...", optimized_endpoint_name, ".", "." ],
[ "/aws/sagemaker/Endpoints", "CPUUtilization", ".", ".", ".", ".", { "yAxis": "right" } ],
[ "...", unoptimized_endpoint_name, ".", ".", { "yAxis": "right" } ]
],
"view": "timeSeries",
"stacked": False,
"region": region,
"stat": "Average",
"period": 60,
"title": "Performance Metrics",
"start": "-PT1H",
"end": "P0D"
}
}
]
}
cw.put_dashboard(DashboardName=dashboard_name, DashboardBody=json.dumps(body))
print('link to dashboard:')
print(f'https://console.aws.amazon.com/cloudwatch/home?region={region}#dashboards:name={dashboard_name}')
###Output
_____no_output_____
###Markdown
Install node.js
###Code
%conda install -c conda-forge nodejs
###Output
_____no_output_____
###Markdown
Validate successful installation
###Code
!node --version
###Output
_____no_output_____
###Markdown
Install Serverless framework and Serverless Artillery
###Code
!npm install -g [email protected] [email protected]
###Output
_____no_output_____
###Markdown
Validate successful installations
###Code
!serverless --version
!slsart --version
###Output
_____no_output_____
###Markdown
Deploy Serverless Artillery

Commentary: The most important file in the load-generating function under the `serverless_artillery` directory is `processor.js`, which is responsible for generating the payload body and the signed headers of every request sent to the SageMaker endpoints. Please take a moment to review the file's contents. In it, you'll see that we sign our requests manually using the AWS Signature Version 4 algorithm. When you use an AWS SDK such as [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html), your requests are signed for you automatically by the library. Here, however, we interact directly with AWS's SageMaker runtime API endpoints, so we must sign the requests ourselves. The access keys and session token of the load-generating Lambda function's role are used to sign each request, and that role is granted permission to invoke SageMaker endpoints in its role statements (defined in `serverless.yml` on line 18). When a request is sent, AWS first validates the signed headers, then validates that the assumed role has permission to invoke endpoints, and finally lets the request from the Lambda pass through.
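To make the signing flow concrete, here is a rough Python analogue of what `processor.js` does in JavaScript, using botocore's SigV4 helpers. It is purely illustrative and is not part of the load generator; the function name, the use of the `requests` library, and the `'sagemaker'` signing service name are assumptions on top of the URL format shown in the load-test script below.

```python
# Illustrative Python analogue of the SigV4 signing performed by processor.js.
import boto3
import requests  # assumed to be available in the environment
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest


def invoke_endpoint_signed(endpoint_name, payload, region):
    credentials = boto3.Session().get_credentials()
    url = (f"https://runtime.sagemaker.{region}.amazonaws.com"
           f"/endpoints/{endpoint_name}/invocations")
    request = AWSRequest(method="POST", url=url, data=payload,
                         headers={"Content-Type": "text/csv"})
    # sign the request headers with the caller's credentials (Signature Version 4)
    SigV4Auth(credentials, "sagemaker", region).add_auth(request)
    return requests.post(url, data=payload, headers=dict(request.headers))
```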
###Code
!cd serverless_artillery && npm install && slsart deploy --stage dev
###Output
_____no_output_____
###Markdown
Create Serverless Artillery load test script
###Code
from IPython.core.magic import register_line_cell_magic
@register_line_cell_magic
def writefilewithvariables(line, cell):
with open(line, 'w') as f:
f.write(cell.format(**globals()))
# Get region that we're currently in
region = Session().boto_region_name
%%writefilewithvariables script.yaml
config:
variables:
unoptimizedEndpointName: {unoptimized_endpoint_name} # the xgboost model has 10000 trees
optimizedEndpointName: {optimized_endpoint_name} # the xgboost model has 10000 trees
numRowsInRequest: 125 # Each request to the endpoint contains 125 rows
target: 'https://runtime.sagemaker.{region}.amazonaws.com'
phases:
- duration: 120
arrivalRate: 20 # 1200 total invocations per minute (600 per endpoint)
- duration: 120
arrivalRate: 40 # 2400 total invocations per minute (1200 per endpoint)
- duration: 120
arrivalRate: 60 # 3600 total invocations per minute (1800 per endpoint)
- duration: 120
arrivalRate: 80 # 4800 invocations per minute (2400 per endpoint... this is the max of the unoptimized endpoint)
- duration: 120
arrivalRate: 120 # only the neo endpoint can handle this load...
- duration: 120
arrivalRate: 160
processor: './processor.js'
scenarios:
- flow:
- post:
url: '/endpoints/{{{{ unoptimizedEndpointName }}}}/invocations'
beforeRequest: 'setRequest'
- flow:
- post:
url: '/endpoints/{{{{ optimizedEndpointName }}}}/invocations'
beforeRequest: 'setRequest'
###Output
_____no_output_____
###Markdown
Perform load tests
###Code
!slsart invoke --stage dev --path script.yaml
print("Here's the link to the dashboard again:")
print(f'https://console.aws.amazon.com/cloudwatch/home?region={region}#dashboards:name={dashboard_name}')
###Output
_____no_output_____
###Markdown
Clean up resources
###Code
# delete endpoints and endpoint configurations
sm = boto3.client('sagemaker')
for name in [unoptimized_endpoint_name, optimized_endpoint_name]:
sm.delete_endpoint(EndpointName=name)
sm.delete_endpoint_config(EndpointConfigName=name)
# remove serverless artillery resources
!slsart remove --stage dev
###Output
_____no_output_____ |
densenet exp.ipynb | ###Markdown
Changed packages:
- keras: 2.2.4 to 2.4.3
- keras-preprocessing: 1.0.9 to 1.1.2
- pillow: 5.3.0 to 7.1.2
###Code
!pip install Pillow==5.3.0 Keras==2.2.4 Keras-Preprocessing==1.0.9
pip install absl-py==0.12.0 alabaster==0.7.12 albumentations==0.1.12 altair==4.1.0 appdirs==1.4.4 argon2-cffi==20.1.0 astor==0.8.1 astropy==4.2.1 astunparse==1.6.3 async-generator==1.10 atari-py==0.2.6 atomicwrites==1.4.0 attrs==20.3.0 audioread==2.1.9 autograd==1.3 Babel==2.9.0 backcall==0.2.0 blis==0.4.1 bokeh==2.3.1 Bottleneck==1.3.2 branca==0.4.2 catalogue==1.0.0 certifi==2020.12.5 cffi==1.14.5 chainer==7.4.0 chardet==3.0.4 click==7.1.2 cloudpickle==1.3.0 cmake==3.12.0 cmdstanpy==0.9.5 colorcet==2.0.6 colorlover==0.3.0 community==1.0.0b1 contextlib2==0.5.5 convertdate==2.3.2 coverage==3.7.1 coveralls==0.5 crcmod==1.7 cufflinks==0.17.3 cupy-cuda101==7.4.0 cvxopt==1.2.6 cvxpy==1.0.31 cycler==0.10.0 cymem==2.0.5 Cython==0.29.22 daft==0.0.4 dask==2.12.0 datascience==0.10.6 debugpy==1.0.0 decorator==4.4.2 defusedxml==0.7.1 descartes==1.1.0 dill==0.3.3 distributed==1.25.3 dlib==19.18.0 dm-tree==0.1.6 docopt==0.6.2 docutils==0.17 dopamine-rl==1.0.5 earthengine-api==0.1.260 easydict==1.9 ecos==2.0.7.post1 editdistance==0.5.3 en-core-web-sm==2.2.5 entrypoints==0.3 ephem==3.7.7.1 et-xmlfile==1.0.1 fa2==0.3.5 fancyimpute==0.4.3 fastprogress==1.0.0 fastrlock==0.6 fbprophet==0.7.1 feather-format==0.4.1 filelock==3.0.12 firebase-admin==4.4.0 fix-yahoo-finance==0.0.22 Flask==1.1.2 flatbuffers==1.12 folium==0.8.3 future==0.16.0 gast==0.3.3 GDAL==2.2.2 gdown==3.6.4 gensim==3.6.0 geographiclib==1.50 geopy==1.17.0 gin-config==0.4.0 glob2==0.7
google==2.0.3
google-api-core==1.26.3
google-api-python-client==1.12.8
google-auth==1.28.1
google-auth-httplib2==0.0.4
google-auth-oauthlib==0.4.4
google-cloud-bigquery==1.21.0
google-cloud-bigquery-storage==1.1.0
google-cloud-core==1.0.3
google-cloud-datastore==1.8.0
google-cloud-firestore==1.7.0
google-cloud-language==1.2.0
google-cloud-storage==1.18.1
google-cloud-translate==1.5.0
google-colab==1.0.0
google-pasta==0.2.0
google-resumable-media==0.4.1
googleapis-common-protos==1.53.0
googledrivedownloader==0.4
graphviz==0.10.1
greenlet==1.0.0
grpcio==1.32.0
gspread==3.0.1
gspread-dataframe==3.0.8
gym==0.17.3
h5py==2.10.0
HeapDict==1.0.1
hijri-converter==2.1.1
holidays==0.10.5.2
holoviews==1.14.3
html5lib==1.0.1
httpimport==0.5.18
httplib2==0.17.4
httplib2shim==0.0.3
humanize==0.5.1
hyperopt==0.1.2
ideep4py==2.0.0.post3
idna==2.10
imageio==2.4.1
imagesize==1.2.0
imbalanced-learn==0.4.3
imblearn==0.0
imgaug==0.2.9
importlib-metadata==3.10.1
importlib-resources==5.1.2
imutils==0.5.4
inflect==2.1.0
iniconfig==1.1.1
intel-openmp==2021.2.0
intervaltree==2.1.0
ipykernel==4.10.1
ipython==5.5.0
ipython-genutils==0.2.0
ipython-sql==0.3.9
ipywidgets==7.6.3
itsdangerous==1.1.0
jax==0.2.12
jaxlib==0.1.65+cuda110
jdcal==1.4.1
jedi==0.18.0
jieba==0.42.1
Jinja2==2.11.3
joblib==1.0.1
jpeg4py==0.1.4
jsonschema==2.6.0
jupyter==1.0.0
jupyter-client==5.3.5
jupyter-console==5.2.0
jupyter-core==4.7.1
jupyterlab-pygments==0.1.2
jupyterlab-widgets==1.0.0
kaggle==1.5.12
kapre==0.1.3.1
Keras==2.4.3
Keras-Preprocessing==1.1.2
keras-vis==0.4.1
kiwisolver==1.3.1
knnimpute==0.1.0
korean-lunar-calendar==0.2.1
librosa==0.8.0
lightgbm==2.2.3
llvmlite==0.34.0
lmdb==0.99
LunarCalendar==0.0.9
lxml==4.2.6
Markdown==3.3.4
MarkupSafe==1.1.1
matplotlib==3.2.2
matplotlib-venn==0.11.6
missingno==0.4.2
mistune==0.8.4
mizani==0.6.0
mkl==2019.0
mlxtend==0.14.0
more-itertools==8.7.0
moviepy==0.2.3.5
mpmath==1.2.1
msgpack==1.0.2
multiprocess==0.70.11.1
multitasking==0.0.9
murmurhash==1.0.5
music21==5.5.0
natsort==5.5.0
nbclient==0.5.3
nbconvert==5.6.1
nbformat==5.1.3
nest-asyncio==1.5.1
networkx==2.5.1
nibabel==3.0.2
nltk==3.2.5
notebook==5.3.1
np-utils==0.5.12.1
numba==0.51.2
numexpr==2.7.3
numpy==1.19.5
nvidia-ml-py3==7.352.0
oauth2client==4.1.3
oauthlib==3.1.0
okgrade==0.4.3
opencv-contrib-python==4.1.2.30
opencv-python==4.1.2.30
openpyxl==2.5.9
opt-einsum==3.3.0
osqp==0.6.2.post0
packaging==20.9
palettable==3.3.0
pandas==1.1.5
pandas-datareader==0.9.0
pandas-gbq==0.13.3
pandas-profiling==1.4.1
pandocfilters==1.4.3
panel==0.11.2
param==1.10.1
parso==0.8.2
pathlib==1.0.1
patsy==0.5.1
pexpect==4.8.0
pickleshare==0.7.5
Pillow==7.1.2
pip-tools==4.5.1
plac==1.1.3
plotly==4.4.1
plotnine==0.6.0
pluggy==0.7.1
pooch==1.3.0
portpicker==1.3.1
prefetch-generator==1.0.1
preshed==3.0.5
prettytable==2.1.0
progressbar2==3.38.0
prometheus-client==0.10.1
promise==2.3
prompt-toolkit==1.0.18
protobuf==3.12.4
psutil==5.4.8
psycopg2==2.7.6.1
ptyprocess==0.7.0
py==1.10.0
pyarrow==3.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycocotools==2.0.2
pycparser==2.20
pyct==0.4.8
pydata-google-auth==1.1.0
pydot==1.3.0
pydot-ng==2.0.0
pydotplus==2.0.2
PyDrive==1.3.1
pyemd==0.5.1
pyerfa==1.7.2
pyglet==1.5.0
Pygments==2.6.1
pygobject==3.26.1
pymc3==3.7
PyMeeus==0.5.11
pymongo==3.11.3
pymystem3==0.2.0
PyOpenGL==3.1.5
pyparsing==2.4.7
pyrsistent==0.17.3
pysndfile==1.3.8
PySocks==1.7.1
pystan==2.19.1.1
pytest==3.6.4
python-apt==0.0.0
python-chess==0.23.11
python-dateutil==2.8.1
python-louvain==0.15
python-slugify==4.0.1
python-utils==2.5.6
pytz==2018.9
pyviz-comms==2.0.1
PyWavelets==1.1.1
PyYAML==3.13
pyzmq==22.0.3
qdldl==0.1.5.post0
qtconsole==5.0.3
QtPy==1.9.0
regex==2019.12.20
requests==2.23.0
requests-oauthlib==1.3.0
resampy==0.2.2
retrying==1.3.3
rpy2==3.4.3
rsa==4.7.2
scikit-image==0.16.2
scikit-learn==0.22.2.post1
scipy==1.4.1
screen-resolution-extra==0.0.0
scs==2.1.3
seaborn==0.11.1
Send2Trash==1.5.0
setuptools-git==1.2
Shapely==1.7.1
simplegeneric==0.8.1
six==1.15.0
sklearn==0.0
sklearn-pandas==1.8.0
smart-open==5.0.0
snowballstemmer==2.1.0
sortedcontainers==2.3.0
SoundFile==0.10.3.post1
spacy==2.2.4
Sphinx==1.8.5
sphinxcontrib-serializinghtml==1.1.4
sphinxcontrib-websupport==1.2.4
SQLAlchemy==1.4.7
sqlparse==0.4.1
srsly==1.0.5
statsmodels==0.10.2
sympy==1.7.1
tables==3.4.4
tabulate==0.8.9
tblib==1.7.0
tensorboard==2.4.1
tensorboard-plugin-wit==1.8.0
tensorflow==2.4.1
tensorflow-datasets==4.0.1
tensorflow-estimator==2.4.0
tensorflow-gcs-config==2.4.0
tensorflow-hub==0.12.0
tensorflow-metadata==0.29.0
tensorflow-probability==0.12.1
termcolor==1.1.0
terminado==0.9.4
testpath==0.4.4
text-unidecode==1.3
textblob==0.15.3
textgenrnn==1.4.1
Theano==1.0.5
thinc==7.4.0
tifffile==2021.4.8
toml==0.10.2
toolz==0.11.1
torch==1.8.1+cu101
torchsummary==1.5.1
torchtext==0.9.1
torchvision==0.9.1+cu101
tornado==5.1.1
tqdm==4.41.1
traitlets==5.0.5
tweepy==3.10.0
typeguard==2.7.1
typing-extensions==3.7.4.3
tzlocal==1.5.1
uritemplate==3.0.1
urllib3==1.24.3
vega-datasets==0.9.0
wasabi==0.8.2
wcwidth==0.2.5
webencodings==0.5.1
Werkzeug==1.0.1
widgetsnbextension==3.5.1
wordcloud==1.5.0
wrapt==1.12.1
xarray==0.15.1
xgboost==0.90
xkit==0.0.0
xlrd==1.1.0
xlwt==1.3.0
yellowbrick==0.9.1
zict==2.0.0
zipp==3.4.1
###Output
_____no_output_____
###Markdown
pip install absl-py==0.12.0 alabaster==0.7.12 altair==4.1.0 appdirs==1.4.4 argon2-cffi==20.1.0 astor==0.8.1 astropy==4.2.1 astunparse==1.6.3 async-generator==1.10 atari-py==0.2.6 atomicwrites==1.4.0 attrs==20.3.0 audioread==2.1.9 autograd==1.3 Babel==2.9.0 backcall==0.2.0 blis==0.4.1 bokeh==2.3.1 Bottleneck==1.3.2 branca==0.4.2 catalogue==1.0.0 certifi==2020.12.5 cffi==1.14.5 chainer==7.4.0 chardet==3.0.4 click==7.1.2 cloudpickle==1.3.0 cmake==3.12.0 cmdstanpy==0.9.5 colorcet==2.0.6 colorlover==0.3.0 community==1.0.0b1 contextlib2==0.5.5 convertdate==2.3.2 coverage==3.7.1 coveralls==0.5 crcmod==1.7 cufflinks==0.17.3 cupy-cuda101==7.4.0 cvxopt==1.2.6 cvxpy==1.0.31 cycler==0.10.0 cymem==2.0.5 Cython==0.29.22 daft==0.0.4 dask==2.12.0 debugpy==1.0.0 decorator==4.4.2 defusedxml==0.7.1 descartes==1.1.0 dill==0.3.3 distributed==1.25.3 dlib==19.18.0 dm-tree==0.1.6 docopt==0.6.2 docutils==0.17 dopamine-rl==1.0.5 easydict==1.9 ecos==2.0.7.post1 editdistance==0.5.3 entrypoints==0.3 ephem==3.7.7.1 et-xmlfile==1.0.1 fa2==0.3.5 fancyimpute==0.4.3 fastprogress==1.0.0 fastrlock==0.6 fbprophet==0.7.1 feather-format==0.4.1 filelock==3.0.12 firebase-admin==4.4.0 fix-yahoo-finance==0.0.22 Flask==1.1.2 flatbuffers==1.12 folium future==0.16.0 gast==0.3.3 GDAL==2.2.2 gdown==3.6.4 gensim==3.6.0 geographiclib==1.50 geopy==1.17.0 gin-config==0.4.0 glob2==0.7 google==2.0.3 graphviz==0.10.1 greenlet==1.0.0 grpcio==1.32.0 gspread==3.0.1 gspread-dataframe==3.0.8 gym==0.17.3 h5py==2.10.0 HeapDict==1.0.1 hijri-converter==2.1.1 holidays==0.10.5.2 holoviews==1.14.3 html5lib==1.0.1 httpimport==0.5.18 httplib2==0.17.4 httplib2shim==0.0.3 humanize==0.5.1 hyperopt==0.1.2 idna==2.10 imageio==2.4.1 imagesize==1.2.0 imbalanced-learn==0.4.3 imblearn==0.0 importlib-metadata==3.10.1 importlib-resources==5.1.2 imutils==0.5.4 inflect==2.1.0 iniconfig==1.1.1 intel-openmp==2021.2.0 intervaltree==2.1.0 ipython==5.5.0 ipython-genutils==0.2.0 ipython-sql==0.3.9 ipywidgets==7.6.3 itsdangerous==1.1.0 jax==0.2.12 jdcal==1.4.1 jedi==0.18.0 jieba==0.42.1 Jinja2==2.11.3 joblib==1.0.1 jpeg4py==0.1.4 jsonschema==2.6.0 jupyter==1.0.0 jupyter-core==4.7.1 jupyterlab-pygments==0.1.2 jupyterlab-widgets==1.0.0 kaggle==1.5.12 kapre==0.1.3.1 Keras==2.4.3 Keras-Preprocessing==1.1.2 keras-vis==0.4.1 kiwisolver==1.3.1 knnimpute==0.1.0 korean-lunar-calendar==0.2.1 librosa==0.8.0 lightgbm==2.2.3 llvmlite==0.34.0 lmdb==0.99 LunarCalendar==0.0.9 lxml==4.2.6 Markdown==3.3.4 MarkupSafe==1.1.1 matplotlib==3.2.2 matplotlib-venn==0.11.6 missingno==0.4.2 mistune==0.8.4 mizani==0.6.0 mkl==2019.0 mlxtend==0.14.0 more-itertools==8.7.0 moviepy==0.2.3.5 mpmath==1.2.1 msgpack==1.0.2 multiprocess==0.70.11.1 multitasking==0.0.9 murmurhash==1.0.5 music21==5.5.0 natsort==5.5.0 nbconvert==5.6.1 nbformat==5.1.3 nest-asyncio==1.5.1 networkx==2.5.1 nibabel==3.0.2 nltk==3.2.5 notebook==5.3.1 np-utils==0.5.12.1 numba==0.51.2 numexpr==2.7.3 numpy==1.19.5 nvidia-ml-py3==7.352.0 oauth2client==4.1.3 oauthlib==3.1.0 okgrade==0.4.3 opencv-contrib-python==4.1.2.30 opencv-python==4.1.2.30 openpyxl==2.5.9 opt-einsum==3.3.0 osqp==0.6.2.post0 packaging==20.9 palettable==3.3.0 pandas==1.1.5 pandas-datareader==0.9.0 pandas-gbq==0.13.3 pandas-profiling==1.4.1 pandocfilters==1.4.3 panel==0.11.2 param==1.10.1 parso==0.8.2 pathlib==1.0.1 patsy==0.5.1 pexpect==4.8.0 pickleshare==0.7.5 Pillow==7.1.2 pip-tools==4.5.1 plac==1.1.3 plotly==4.4.1 plotnine==0.6.0 pluggy==0.7.1 pooch==1.3.0 portpicker==1.3.1 prefetch-generator==1.0.1 preshed==3.0.5 prettytable==2.1.0 progressbar2==3.38.0 
prometheus-client==0.10.1 promise==2.3 prompt-toolkit==1.0.18 protobuf==3.12.4 psutil==5.4.8 psycopg2==2.7.6.1 ptyprocess==0.7.0 py==1.10.0 pyarrow==3.0.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 pycocotools==2.0.2 pycparser==2.20 pyct==0.4.8 pydata-google-auth==1.1.0 pydot==1.3.0 pydot-ng==2.0.0 pydotplus==2.0.2 PyDrive==1.3.1 pyemd==0.5.1 pyerfa==1.7.2 pyglet==1.5.0 Pygments==2.6.1 pygobject pymc3==3.7 PyMeeus==0.5.11 pymongo==3.11.3 pymystem3==0.2.0 PyOpenGL==3.1.5 pyparsing==2.4.7 pyrsistent==0.17.3 pysndfile==1.3.8 PySocks==1.7.1 pystan==2.19.1.1 pytest==3.6.4 python-apt==0.0.0 python-chess==0.23.11 python-dateutil==2.8.1 python-louvain==0.15 python-slugify==4.0.1 python-utils==2.5.6 pytz==2018.9 pyviz-comms==2.0.1 PyWavelets==1.1.1 PyYAML==3.13 pyzmq==22.0.3 qdldl==0.1.5.post0 qtconsole==5.0.3 QtPy==1.9.0 regex==2019.12.20 requests==2.23.0 requests-oauthlib==1.3.0 resampy==0.2.2 retrying==1.3.3 rpy2==3.4.3 rsa==4.7.2 scikit-image==0.16.2 scikit-learn==0.22.2.post1 scipy==1.4.1 scs==2.1.3 seaborn==0.11.1 Send2Trash==1.5.0 setuptools-git==1.2 Shapely==1.7.1 simplegeneric==0.8.1 six==1.15.0 sklearn==0.0 sklearn-pandas==1.8.0 smart-open==5.0.0 snowballstemmer==2.1.0 sortedcontainers==2.3.0 SoundFile==0.10.3.post1 spacy==2.2.4 Sphinx==1.8.5 sphinxcontrib-serializinghtml==1.1.4 sphinxcontrib-websupport==1.2.4 SQLAlchemy==1.4.7 sqlparse==0.4.1 srsly==1.0.5 statsmodels==0.10.2 sympy==1.7.1 tables==3.4.4 tabulate==0.8.9 tblib==1.7.0 tensorboard==2.4.1 tensorboard-plugin-wit==1.8.0 tensorflow==2.4.1 tensorflow-datasets==4.0.1 tensorflow-estimator==2.4.0 tensorflow-gcs-config==2.4.0 tensorflow-hub==0.12.0 tensorflow-metadata==0.29.0 tensorflow-probability==0.12.1 termcolor==1.1.0 terminado==0.9.4 testpath==0.4.4 text-unidecode==1.3 textblob==0.15.3 textgenrnn==1.4.1 Theano==1.0.5 thinc==7.4.0 tifffile==2021.4.8 toml==0.10.2 toolz==0.11.1 torchsummary==1.5.1 tornado==5.1.1 tqdm==4.41.1 traitlets==5.0.5 tweepy==3.10.0 typeguard==2.7.1 typing-extensions==3.7.4.3 tzlocal==1.5.1 uritemplate==3.0.1 urllib3==1.24.3 vega-datasets==0.9.0 wasabi==0.8.2 wcwidth==0.2.5 webencodings==0.5.1 Werkzeug==1.0.1 widgetsnbextension==3.5.1 wordcloud==1.5.0 wrapt==1.12.1 xarray==0.15.1 xlrd==1.1.0 xlwt==1.3.0 yellowbrick==0.9.1 zict==2.0.0 zipp==3.4.1 ipykernel jupyter-client jupyter-console nbclient
###Code
datascience==0.10.6 folium==0.8.3 google-api-core==1.26.3 google-api-python-client==1.12.8 google-auth==1.28.1 google-auth-httplib2==0.0.4 google-auth-oauthlib==0.4.4 google-cloud-bigquery==1.21.0 google-cloud-bigquery-storage==1.1.0 google-cloud-core==1.0.3 google-cloud-datastore==1.8.0 google-cloud-firestore==1.7.0 google-cloud-language==1.2.0 google-cloud-storage==1.18.1 google-cloud-translate==1.5.0 google-colab==1.0.0 google-pasta==0.2.0 google-resumable-media==0.4.1 googleapis-common-protos==1.53.0 googledrivedownloader==0.4 earthengine-api==0.1.260
###Output
_____no_output_____ |
Code/Fig_4_5_experimental_data.ipynb | ###Markdown
Loica and Flapjack setup
###Code
!pip install git+https://github.com/SynBioUC/flapjack.git --quiet
#uncomment when this works
!pip install git+https://github.com/SynBioUC/LOICA.git --quiet
from google.colab import drive
drive.mount("/content/gdrive")
% cd /content/gdrive/My Drive/
#uncomment if you don't have LOICA cloned in your drive or to update it
#!git clone https://github.com/SynBioUC/LOICA.git
% cd LOICA/
#!pip install -e .
from flapjack import *
from loica import *
import numpy as np
import getpass
import datetime
import random as rd
import pandas as pd
from numpy.fft import fft, ifft, fftfreq
from scipy.interpolate import interp1d, UnivariateSpline
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_poisson_deviance
from sklearn.metrics import mean_gamma_deviance
from sklearn.metrics import mean_absolute_error
from scipy.signal import savgol_filter, medfilt
import matplotlib.pyplot as plt
import seaborn as sns
color_inverse = 'dodgerblue'
color_direct = 'orangered'
color_indirect ='gold'
%matplotlib inline
SMALL_SIZE = 6
MEDIUM_SIZE = 10
BIGGER_SIZE = 12
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=SMALL_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=SMALL_SIZE) # fontsize of the figure title
###Output
_____no_output_____
###Markdown
Login
###Code
user = input()
passwd = getpass.getpass()
fj = Flapjack('flapjack.rudge-lab.org:8000')
fj.log_in(username=user, password=passwd)
dna = fj.get('dna', name='Rep')
if len(dna)==0:
dna = fj.create('dna', name='Rep')
vector = fj.get('vector', name='Rep')
if len(vector)==0:
vector = fj.create('vector', name='Rep', dnas=dna.id)
cfp = fj.get('signal', name='CFP')
yfp = fj.get('signal', name='YFP')
rfp = fj.get('signal', name='RFP')
media = fj.get('media', name='Loica')
if len(media)==0:
media = fj.create('media', name='Loica', description='Simulated loica media')
strain = fj.get('strain', name='Loica strain')
if len(strain)==0:
strain = fj.create('strain', name='Loica strain', description='Loica test strain')
biomass_signal = fj.get('signal', name='OD')
media_id = fj.get('media', name='M9-glycerol').id
strain_id = fj.get('strain', name='Top10').id
peda_id = fj.get('vector', name='pEDA').id
pbaa_id = fj.get('vector', name='pBAA').id
pbca_id = fj.get('vector', name='pBCA').id
paaa_id = fj.get('vector', name='pAAA').id
pgaa_id = fj.get('vector', name='pGAA').id
rfp_id = fj.get('signal', name='RFP').id
yfp_id = fj.get('signal', name='YFP').id
cfp_id = fj.get('signal', name='CFP').id
od_id = fj.get('signal', name='OD').id
study_id = fj.get('study', search='context').id
df_direct = fj.analysis(study=study_id,
media=media_id,
strain=strain_id,
signal=yfp_id,
type='Expression Rate (direct)',
degr=0,
eps_L=1e-5,
biomass_signal=od_id,
)
df_ref = fj.analysis(study=study_id,
vector=paaa_id,
media=media_id,
strain=strain_id,
signal=rfp_id,
type='Expression Rate (inverse)',
degr=0,
eps=1e-2,
n_gaussians=24,
biomass_signal=od_id,
)
df = fj.analysis(study=study_id,
vector=pbaa_id,
media=media_id,
strain=strain_id,
signal=rfp_id,
type='Expression Rate (inverse)',
degr=0,
eps=1e-2,
n_gaussians=24,
biomass_signal=od_id,
)
df_indirect = fj.analysis(study=study_id,
media=media_id,
strain=strain_id,
signal=yfp_id,
type='Expression Rate (indirect)',
pre_smoothing=11,
post_smoothing=0,
biomass_signal=od_id,
)
###Output
100%|██████████| 100/100 [00:25<00:00, 3.85it/s]
###Markdown
pAAA
###Code
medias = ['M9-glycerol', 'M9-glucose']
strains = ['MG1655z1', 'Top10']
for media in medias:
for strain in strains:
media_id = fj.get('media', name=media).id
strain_id = fj.get('strain', name=strain).id
df_indirect = fj.analysis(
media=media_id,
study=study_id,
strain=strain_id,
vector=paaa_id,
type='Expression Rate (indirect)',
biomass_signal=od_id,
pre_smoothing=11,
post_smoothing=0,
#bg_correction=2,
#min_biomass=0.05,
#remove_data=False
)
df_direct = fj.analysis(study=study_id,
vector=paaa_id,
media=media_id,
strain=strain_id,
type='Expression Rate (direct)',
degr=0,
eps_L=1e-5,
biomass_signal=od_id,
)
df_inverse = fj.analysis(study=study_id,
vector=paaa_id,
media=media_id,
strain=strain_id,
type='Expression Rate (inverse)',
degr=0,
eps=1e-2,
n_gaussians=24,
biomass_signal=od_id,
)
signals = ['OD', 'RFP', 'YFP', 'CFP']
titles = ['Growth', 'RFP', 'YFP', 'CFP']
colors = ['k', 'r', 'g', 'b']
w = 3.16 #3.3
fig,axs = plt.subplots(2,2,figsize=(w, w* 0.75), sharex=True)
for sig,ax,title,color in zip(signals, axs.ravel(), titles, colors):
rfp_direct = df_direct[df_direct.Signal==sig].groupby('Time').mean().Rate
t_direct = df_direct[df_direct.Signal==sig].groupby('Time').mean().index
rfp_direct_std = df_direct[df_direct.Signal==sig].groupby('Time').std().Rate
rfp_inverse = df_inverse[df_inverse.Signal==sig].groupby('Time').mean().Rate
t_inverse = df_inverse[df_inverse.Signal==sig].groupby('Time').mean().index
rfp_inverse_std = df_inverse[df_inverse.Signal==sig].groupby('Time').std().Rate
rfp_indirect = df_indirect[df_indirect.Signal==sig].groupby('Time').mean().Rate
t_indirect = df_indirect[df_indirect.Signal==sig].groupby('Time').mean().index
ax.plot(rfp_indirect, color=color_indirect, linestyle='-', linewidth='0.5')
ax.plot(rfp_direct, color=color_direct, linestyle='-', linewidth='0.5')
#plt.fill_between(t_direct, rfp_direct-rfp_direct_std, rfp_direct+rfp_direct_std, color='red', alpha=0.2)
ax.plot(rfp_inverse, color=color_inverse, linestyle='-', linewidth='0.5')
#plt.fill_between(t_inverse, rfp_inverse-rfp_inverse_std, rfp_inverse+rfp_inverse_std, color='blue', alpha=0.2)
#plt.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
ax.set_xticks([0,12,24])
ax.set_ylabel('Expr. rate (AU/h)')
ax.set_ylim(-0.5, rfp_inverse.max()*1.2)
#ax.set_title(title)
ax.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
#plt.suptitle(f'{media}, {strain}')
axs[0,0].set_ylabel(r'Growth rate ($h^{-1}$)')
axs[1,0].set_xlabel('Time (h)')
axs[1,1].set_xlabel('Time (h)')
#plt.legend(['Direct', 'Inverse'])
plt.tight_layout()
plt.subplots_adjust(top=0.9)
plt.savefig(f'pAAA_{media}_{strain}_subplots.png', dpi=300)
rfp_inverse.max()
###Output
_____no_output_____
###Markdown
Context
###Code
prom_map = {
'A': 'J23101',
'B': 'J23106',
'C': 'J23107',
'D': 'R0011',
'E': 'R0040',
'F': 'pLas81',
'G': 'pLux76'
}
###Output
_____no_output_____
###Markdown
Direct YFP profiles
###Code
yfp_vectors = [
['pBFA', 'pEFA', 'pGFA'],
['pBDA', 'pEDA', 'pGDA'],
['pBCA', 'pECA', 'pGCA'],
['pAAA', 'pBAA', 'pEAA', 'pGAA']
]
yfp_vector_ids = [[fj.get('vector', name=name).id[0] for name in vecs] for vecs in yfp_vectors]
yfp_id = fj.get('signal', name='YFP').id
medias = ['M9-glycerol', 'M9-glucose']
strains = ['Top10', 'MG1655z1']
# YFP figures
for media in medias:
for strain in strains:
print(media, strain)
media_id = fj.get('media', name=media).id
strain_id = fj.get('strain', name=strain).id
df_ref = fj.analysis(vector=paaa_id,
media=media_id,
strain=strain_id,
signal=yfp_id,
type='Expression Rate (direct)',
degr=0,
eps_L=1e-5,
n_gaussians=24,
biomass_signal=od_id,
)
df_ref_gr = fj.analysis(vector=paaa_id,
media=media_id,
strain=strain_id,
signal=od_id,
type='Expression Rate (direct)',
degr=0,
eps_L=1e-5,
n_gaussians=24,
biomass_signal=od_id)
mdf_ref_gr = df_ref_gr.groupby('Time').mean()
ref_grt = mdf_ref_gr.index
ref_gr = mdf_ref_gr.Rate
ref_pk_idx = np.where(ref_gr==ref_gr.max())[0][0]
ref_pk_time = ref_grt[ref_pk_idx]
print('ref_pk_time ', ref_pk_time)
for vi,vector_id in enumerate(yfp_vector_ids):
df = fj.analysis(vector=vector_id,
media=media_id,
strain=strain_id,
signal=yfp_id,
type='Expression Rate (direct)',
degr=0,
eps_L=1e-5,
n_gaussians=24,
biomass_signal=od_id)
plt.figure(figsize=(1.5,1.25))
fname = '-'.join([media, strain, yfp_vectors[vi][0][2], '-direct-YFP.png'])
for name,vec in df.groupby('Vector'):
print(name)
yfp = vec.groupby('Time').mean().Rate
yfpt = vec.groupby('Time').mean().index
df_gr = fj.analysis(vector=fj.get('vector', name=name).id,
media=media_id,
strain=strain_id,
signal=od_id,
type='Expression Rate (direct)',
degr=0,
eps_L=1e-5,
n_gaussians=24,
biomass_signal=od_id)
mdf_gr = df_gr.groupby('Time').mean()
grt = mdf_gr.index
gr = mdf_gr.Rate
pk_idx = np.where(gr==gr.max())[0][0]
pk_time = grt[pk_idx]
print(pk_time)
plt.plot(yfpt - pk_time, (yfp-yfp.mean()) / yfp.std(), linewidth=0.5)
yfp_ref = df_ref.groupby('Time').mean().Rate
tref = df_ref.groupby('Time').mean().index
plt.plot(tref - ref_pk_time, (yfp_ref-yfp_ref.mean()) / yfp_ref.std(), 'k--', linewidth=0.5)
plt.title(f'{media}, {strain}')
#plt.legend([prom_map[vec[1]] for vec in yfp_vectors])
plt.tight_layout()
#fig = flapjack.layout_print(fig, width=1.5, height=1.25)
#fig.update_yaxes(title='')
#fig.update_xaxes(title='')
#fig.layout.annotations[0].update(text=f'{media}, {strain}')
#for vec in yfp_vectors[vi]:
# rfp_code = vec[1]
# fig.update_traces(name=prom_map[rfp_code], selector=dict(name=vec))
#io.write_image(fig, fname)
plt.savefig(fname, dpi=300)
yfp_vectors = [
['pBFA', 'pEFA', 'pGFA'],
['pBDA', 'pEDA', 'pGDA'],
['pBCA', 'pECA', 'pGCA'],
['pAAA', 'pBAA', 'pEAA', 'pGAA']]
for vectors in yfp_vectors:
print(vectors)
plt.figure()
for v in vectors:
plt.plot(0,0)
plt.legend([prom_map[vec[1]] for vec in vectors])
plt.savefig(f'legend-{vectors[0][2]}-YFP.png', dpi=300)
###Output
['pBFA', 'pEFA', 'pGFA']
['pBDA', 'pEDA', 'pGDA']
['pBCA', 'pECA', 'pGCA']
['pAAA', 'pBAA', 'pEAA', 'pGAA']
###Markdown
Direct RFP profiles
###Code
rfp_vectors = [
['pBAA', 'pBCA', 'pBDA', 'pBFA'],
['pEAA', 'pECA', 'pEDA', 'pEFA'],
['pGAA', 'pGCA', 'pGDA', 'pGEA', 'pGFA']
]
rfp_vector_ids = [[fj.get('vector', name=name).id[0] for name in vecs] for vecs in rfp_vectors]
rfp_id = fj.get('signal', name='RFP').id
medias = ['M9-glucose', 'M9-glycerol']
strains = ['MG1655z1', 'Top10']
# RFP figures
for media in medias:
for strain in strains:
print(media, strain)
media_id = fj.get('media', name=media).id
strain_id = fj.get('strain', name=strain).id
df_ref = fj.analysis(vector=paaa_id,
media=media_id,
strain=strain_id,
signal=rfp_id,
type='Expression Rate (direct)',
degr=0,
eps_L=1e-5,
n_gaussians=24,
biomass_signal=od_id,
)
df_ref_gr = fj.analysis(vector=paaa_id,
media=media_id,
strain=strain_id,
signal=od_id,
type='Expression Rate (direct)',
degr=0,
eps_L=1e-5,
n_gaussians=24,
biomass_signal=od_id)
mdf_ref_gr = df_ref_gr.groupby('Time').mean()
ref_grt = mdf_ref_gr.index
ref_gr = mdf_ref_gr.Rate
ref_pk_idx = np.where(ref_gr==ref_gr.max())[0][0]
ref_pk_time = ref_grt[ref_pk_idx]
print('ref_pk_time ', ref_pk_time)
for vi,vector_id in enumerate(rfp_vector_ids):
df = fj.analysis(vector=vector_id,
media=media_id,
strain=strain_id,
signal=rfp_id,
type='Expression Rate (direct)',
degr=0,
eps_L=1e-5,
n_gaussians=24,
biomass_signal=od_id)
plt.figure(figsize=(1.5,1.25))
fname = '-'.join([media, strain, rfp_vectors[vi][0][1], '-direct-RFP.png'])
for name,vec in df.groupby('Vector'):
print(name)
rfp = vec.groupby('Time').mean().Rate
rfpt = vec.groupby('Time').mean().index
df_gr = fj.analysis(vector=fj.get('vector', name=name).id,
media=media_id,
strain=strain_id,
signal=od_id,
type='Expression Rate (direct)',
degr=0,
eps_L=1e-5,
n_gaussians=24,
biomass_signal=od_id)
mdf_gr = df_gr.groupby('Time').mean()
grt = mdf_gr.index
gr = mdf_gr.Rate
pk_idx = np.where(gr==gr.max())[0][0]
pk_time = grt[pk_idx]
print(pk_time)
plt.plot(rfpt - pk_time, (rfp-rfp.mean()) / rfp.std(), linewidth=0.5)
rfp_ref = df_ref.groupby('Time').mean().Rate
tref = df_ref.groupby('Time').mean().index
plt.plot(tref - ref_pk_time, (rfp_ref-rfp_ref.mean()) / rfp_ref.std(), 'k--', linewidth=0.5)
plt.title(f'{media}, {strain}')
plt.tight_layout()
#ax.set_ylim([0,1])
#ax.set_xticks([0,12,24])
#ax.set_yticks([0,0.5,1])
#fig = flapjack.layout_print(fig, width=1.5, height=1.25)
#fig.update_yaxes(title='')
#fig.update_xaxes(title='')
#fig.layout.annotations[0].update(text=f'{media}, {strain}')
#for vec in yfp_vectors[vi]:
# rfp_code = vec[1]
# fig.update_traces(name=prom_map[rfp_code], selector=dict(name=vec))
#io.write_image(fig, fname)
plt.savefig(fname, dpi=300)
rfp_vectors = [
['pBAA', 'pBCA', 'pBDA', 'pBFA'],
['pEAA', 'pECA', 'pEDA', 'pEFA'],
['pGAA', 'pGCA', 'pGDA', 'pGEA', 'pGFA']
]
for vectors in rfp_vectors:
print(vectors)
plt.figure()
for v in vectors:
plt.plot(0,0)
plt.legend([prom_map[vec[2]] for vec in vectors])
plt.savefig(f'legend-{vectors[0][1]}-RFP.png', dpi=300)
###Output
['pBAA', 'pBCA', 'pBDA', 'pBFA']
['pEAA', 'pECA', 'pEDA', 'pEFA']
['pGAA', 'pGCA', 'pGDA', 'pGEA', 'pGFA']
###Markdown
Inverse YFP profilesChange direct to inverse, change eps_L for eps, did I need to change eps -3?
###Code
yfp_vectors = [
['pBFA', 'pEFA', 'pGFA'],
#['pBDA', 'pEDA', 'pGDA'],
#['pBCA', 'pECA', 'pGCA'],
#['pAAA', 'pBAA', 'pEAA', 'pGAA']
]
yfp_vector_ids = [[fj.get('vector', name=name).id[0] for name in vecs] for vecs in yfp_vectors]
yfp_id = fj.get('signal', name='YFP').id
medias = ['M9-glycerol'] #, 'M9-glucose']
strains = ['Top10'] #, 'MG1655z1']
# YFP figures
for media in medias:
for strain in strains:
print(media, strain)
media_id = fj.get('media', name=media).id
strain_id = fj.get('strain', name=strain).id
df_ref = fj.analysis(vector=paaa_id,
media=media_id,
strain=strain_id,
signal=yfp_id,
type='Expression Rate (inverse)',
degr=0,
eps=1e-2,
n_gaussians=24,
biomass_signal=od_id,
)
df_ref_gr = fj.analysis(vector=paaa_id,
media=media_id,
strain=strain_id,
signal=od_id,
type='Expression Rate (inverse)',
degr=0,
eps=1e-2,
n_gaussians=24,
biomass_signal=od_id)
mdf_ref_gr = df_ref_gr.groupby('Time').mean()
ref_grt = mdf_ref_gr.index
ref_gr = mdf_ref_gr.Rate
ref_pk_idx = np.where(ref_gr==ref_gr.max())[0][0]
ref_pk_time = ref_grt[ref_pk_idx]
print('ref_pk_time ', ref_pk_time)
for vi,vector_id in enumerate(yfp_vector_ids):
df = fj.analysis(vector=vector_id,
media=media_id,
strain=strain_id,
signal=[yfp_id, cfp_id],
type='Expression Rate (inverse)',
degr=0,
eps=1e-2,
n_gaussians=24,
biomass_signal=od_id)
plt.figure(figsize=(1.5,1.25))
fname = '-'.join([media, strain, yfp_vectors[vi][0][2], '-inverse-YFP.png'])
for name,vec in df.groupby('Vector'):
print(name)
yfp = vec[vec.Signal=='YFP'].groupby('Time').mean().Rate
cfp = vec[vec.Signal=='CFP'].groupby('Time').mean().Rate
yfpt = vec[vec.Signal=='YFP'].groupby('Time').mean().index
df_gr = fj.analysis(vector=fj.get('vector', name=name).id,
media=media_id,
strain=strain_id,
signal=od_id,
type='Expression Rate (inverse)',
degr=0,
eps=1e-2,
n_gaussians=24,
biomass_signal=od_id)
mdf_gr = df_gr.groupby('Time').mean()
grt = mdf_gr.index
gr = mdf_gr.Rate
pk_idx = np.where(gr==gr.max())[0][0]
pk_time = grt[pk_idx]
print(pk_time)
#plt.plot(yfpt - pk_time, (yfp-yfp.mean()) / yfp.std(), linewidth=0.5)
plt.plot(yfpt - pk_time, yfp/cfp.mean(), linewidth=0.5)
yfp_ref = df_ref.groupby('Time').mean().Rate
tref = df_ref.groupby('Time').mean().index
#plt.plot(tref - ref_pk_time, (yfp_ref-yfp_ref.mean()) / yfp_ref.std(), 'k--', linewidth=0.5)
plt.title(f'{media}, {strain}')
plt.tight_layout()
#fig = flapjack.layout_print(fig, width=1.5, height=1.25)
#fig.update_yaxes(title='')
#fig.update_xaxes(title='')
#fig.layout.annotations[0].update(text=f'{media}, {strain}')
#for vec in yfp_vectors[vi]:
# rfp_code = vec[1]
# fig.update_traces(name=prom_map[rfp_code], selector=dict(name=vec))
#io.write_image(fig, fname)
plt.savefig(fname, dpi=300)
###Output
M9-glycerol Top10
###Markdown
Inverse RFP profiles
###Code
rfp_vectors = [
['pBAA', 'pBCA', 'pBDA', 'pBFA'],
['pEAA', 'pECA', 'pEDA', 'pEFA'],
['pGAA', 'pGCA', 'pGDA', 'pGEA', 'pGFA']
]
rfp_vector_ids = [[fj.get('vector', name=name).id[0] for name in vecs] for vecs in rfp_vectors]
rfp_id = fj.get('signal', name='RFP').id
medias = ['M9-glucose', 'M9-glycerol']
strains = ['MG1655z1', 'Top10']
# RFP figures
for media in medias:
for strain in strains:
print(media, strain)
media_id = fj.get('media', name=media).id
strain_id = fj.get('strain', name=strain).id
df_ref = fj.analysis(vector=paaa_id,
media=media_id,
strain=strain_id,
signal=rfp_id,
type='Expression Rate (inverse)',
degr=0,
eps=1e-5,
n_gaussians=24,
biomass_signal=od_id,
)
df_ref_gr = fj.analysis(vector=paaa_id,
media=media_id,
strain=strain_id,
signal=od_id,
type='Expression Rate (inverse)',
degr=0,
eps=1e-5,
n_gaussians=24,
biomass_signal=od_id)
mdf_ref_gr = df_ref_gr.groupby('Time').mean()
ref_grt = mdf_ref_gr.index
ref_gr = mdf_ref_gr.Rate
ref_pk_idx = np.where(ref_gr==ref_gr.max())[0][0]
ref_pk_time = ref_grt[ref_pk_idx]
print('ref_pk_time ', ref_pk_time)
for vi,vector_id in enumerate(rfp_vector_ids):
df = fj.analysis(vector=vector_id,
media=media_id,
strain=strain_id,
signal=rfp_id,
type='Expression Rate (inverse)',
degr=0,
eps=1e-5,
n_gaussians=24,
biomass_signal=od_id)
plt.figure(figsize=(1.5,1.25))
fname = '-'.join([media, strain, rfp_vectors[vi][0][1], '-inverse-RFP.png'])
for name,vec in df.groupby('Vector'):
print(name)
rfp = vec.groupby('Time').mean().Rate
rfpt = vec.groupby('Time').mean().index
df_gr = fj.analysis(vector=fj.get('vector', name=name).id,
media=media_id,
strain=strain_id,
signal=od_id,
type='Expression Rate (inverse)',
degr=0,
eps=1e-5,
n_gaussians=24,
biomass_signal=od_id)
mdf_gr = df_gr.groupby('Time').mean()
grt = mdf_gr.index
gr = mdf_gr.Rate
pk_idx = np.where(gr==gr.max())[0][0]
pk_time = grt[pk_idx]
print(pk_time)
plt.plot(rfpt - pk_time, (rfp-rfp.mean()) / rfp.std(), linewidth=0.5)
rfp_ref = df_ref.groupby('Time').mean().Rate
tref = df_ref.groupby('Time').mean().index
plt.plot(tref - ref_pk_time, (rfp_ref-rfp_ref.mean()) / rfp_ref.std(), 'k--', linewidth=0.5)
plt.title(f'{media}, {strain}')
plt.tight_layout()
#fig = flapjack.layout_print(fig, width=1.5, height=1.25)
#fig.update_yaxes(title='')
#fig.update_xaxes(title='')
#fig.layout.annotations[0].update(text=f'{media}, {strain}')
#for vec in yfp_vectors[vi]:
# rfp_code = vec[1]
# fig.update_traces(name=prom_map[rfp_code], selector=dict(name=vec))
#io.write_image(fig, fname)
plt.savefig(fname, dpi=300)
###Output
M9-glucose MG1655z1
###Markdown
Inverse all CFP profiles
###Code
medias = ['M9-glycerol','M9-glucose']
strains = ['Top10', 'MG1655z1']
cfp_id = fj.get('signal', name='CFP').id
for media in medias:
for strain in strains:
media_id = fj.get('media', name=media).id
strain_id = fj.get('strain', name=strain).id
df = fj.analysis(study=study_id,
signal=cfp_id,
media=media_id,
strain=strain_id,
type='Expression Rate (inverse)',
degr=0,
eps=1e-2,
n_gaussians=24,
biomass_signal=od_id)
plt.figure(figsize=(1.5,1.25))
for name,vec in df.groupby('Vector'):
cfp = vec.groupby('Time').mean().Rate
cfpt = vec.groupby('Time').mean().index
df_gr = fj.analysis(vector=fj.get('vector', name=name).id,
media=media_id,
strain=strain_id,
signal=od_id,
type='Expression Rate (inverse)',
degr=0,
eps=1e-2,
n_gaussians=24,
biomass_signal=od_id)
mdf_gr = df_gr.groupby('Time').mean()
grt = mdf_gr.index
gr = mdf_gr.Rate
pk_idx = np.where(gr==gr.max())[0][0]
pk_time = grt[pk_idx]
print(pk_time)
plt.plot(cfpt - pk_time, (cfp-cfp.mean()) / cfp.std(), linewidth=0.5, color='blue', alpha=0.2)
plt.title(f'{media}, {strain}')
plt.tight_layout()
#fig = flapjack.layout_print(fig, width=1.5, height=1.25)
#fig.update_traces(showlegend=False, line=dict(color='rgba(0, 0, 255, 0.2)'))
#fig.update_yaxes(title='')
#fig.update_xaxes(title='')
#fig.layout.annotations[0].update(text=f'{media}, {strain}')
fname = fname = '-'.join([media, strain, 'CFP.png'])
#io.write_image(fig, fname)
plt.savefig(fname, dpi=300)
###Output
_____no_output_____
###Markdown
Growth
###Code
medias = ['M9-glycerol', 'M9-glucose']
strains = ['Top10', 'MG1655z1']
cfp_id = fj.get('signal', name='CFP').id
for media in medias:
for strain in strains:
media_id = fj.get('media', name=media).id
strain_id = fj.get('strain', name=strain).id
df_ref_gr = fj.analysis(vector=paaa_id,
media=media_id,
strain=strain_id,
signal=od_id,
type='Expression Rate (inverse)',
degr=0,
eps=1e-2,
n_gaussians=24,
biomass_signal=od_id)
mdf_ref_gr = df_ref_gr.groupby('Time').mean()
ref_grt = mdf_ref_gr.index
ref_gr = mdf_ref_gr.Rate
ref_pk_idx = np.where(ref_gr==ref_gr.max())[0][0]
ref_pk_time = ref_grt[ref_pk_idx]
print('ref_pk_time ', ref_pk_time)
#for vi,vector_id in enumerate(yfp_vector_ids):
fname = '-'.join([media, strain, '-inverse-gr.png'])
#for name,vec in df.groupby('Vector'):
#print(name)
df_gr = fj.analysis(vector=fj.get('vector', name=name).id,
media=media_id,
strain=strain_id,
signal=od_id,
type='Expression Rate (inverse)',
degr=0,
eps=1e-2,
n_gaussians=24,
biomass_signal=od_id)
mdf_gr = df_gr.groupby('Time').mean()
grt = mdf_gr.index
gr = mdf_gr.Rate
pk_idx = np.where(gr==gr.max())[0][0]
pk_time = grt[pk_idx]
print(pk_time)
#yfp = vec.groupby('Time').mean().Rate
#yfpt = vec.groupby('Time').mean().index
yfp = df_gr.groupby('Time').mean().Rate
yfpt = df_gr.groupby('Time').mean().index
plt.plot(yfpt - pk_time, (yfp-yfp.mean()) / yfp.std(), linewidth=0.5)
#yfp_ref = df_ref.groupby('Time').mean().Rate
#tref = df_ref.groupby('Time').mean().index
yfp_ref = df_ref_gr.groupby('Time').mean().Rate
tref = df_ref_gr.groupby('Time').mean().index
plt.plot(tref - ref_pk_time, (yfp_ref-yfp_ref.mean()) / yfp_ref.std(), 'k--', linewidth=0.5)
plt.title(f'{media}, {strain}')
plt.tight_layout()
#fig = flapjack.layout_print(fig, width=1.5, height=1.25)
#fig.update_yaxes(title='')
#fig.update_xaxes(title='')
#fig.layout.annotations[0].update(text=f'{media}, {strain}')
#for vec in yfp_vectors[vi]:
# rfp_code = vec[1]
# fig.update_traces(name=prom_map[rfp_code], selector=dict(name=vec))
#io.write_image(fig, fname)
plt.savefig(fname, dpi=300)
###Output
_____no_output_____ |
notebooks/crispr/Dual CRISPR 5-Count Plots.ipynb | ###Markdown
Dual CRISPR Screen Analysis Count Plots
Amanda Birmingham, CCBB, UCSD ([email protected])

Instructions
To run this notebook reproducibly, follow these steps:
1. Click **Kernel** > **Restart & Clear Output**
2. When prompted, click the red **Restart & clear all outputs** button
3. Fill in the values for your analysis for each of the variables in the [Input Parameters](input-parameters) section
4. Click **Cell** > **Run All**

Input Parameters
###Code
g_timestamp = ""
g_dataset_name = "20160510_A549"
g_count_alg_name = "19mer_1mm_py"
g_fastq_counts_dir = '/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/data/interim/20160510_D00611_0278_BHK55CBCXX_A549'
g_fastq_counts_run_prefix = "19mer_1mm_py_20160615223822"
g_collapsed_counts_dir = "/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/data/processed/20160510_A549"
g_collapsed_counts_run_prefix = "20160510_A549_19mer_1mm_py_20160616101309"
g_combined_counts_dir = ""
g_combined_counts_run_prefix = ""
g_plots_dir = ""
g_plots_run_prefix = ""
g_code_location = "/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python"
###Output
_____no_output_____
###Markdown
Matplotlib Display
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
CCBB Library Imports
###Code
import sys
sys.path.append(g_code_location)
###Output
_____no_output_____
###Markdown
Automated Set-Up
###Code
# %load -s describe_var_list /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/utilities/analysis_run_prefixes.py
def describe_var_list(input_var_name_list):
description_list = ["{0}: {1}\n".format(name, eval(name)) for name in input_var_name_list]
return "".join(description_list)
from ccbbucsd.utilities.analysis_run_prefixes import check_or_set, get_run_prefix, get_timestamp
g_timestamp = check_or_set(g_timestamp, get_timestamp())
g_collapsed_counts_dir = check_or_set(g_collapsed_counts_dir, g_fastq_counts_dir)
g_collapsed_counts_run_prefix = check_or_set(g_collapsed_counts_run_prefix, g_fastq_counts_run_prefix)
g_combined_counts_dir = check_or_set(g_combined_counts_dir, g_collapsed_counts_dir)
g_combined_counts_run_prefix = check_or_set(g_combined_counts_run_prefix, g_collapsed_counts_run_prefix)
g_plots_dir = check_or_set(g_plots_dir, g_combined_counts_dir)
g_plots_run_prefix = check_or_set(g_plots_run_prefix,
get_run_prefix(g_dataset_name, g_count_alg_name, g_timestamp))
print(describe_var_list(['g_timestamp','g_collapsed_counts_dir', 'g_collapsed_counts_run_prefix',
'g_combined_counts_dir', 'g_combined_counts_run_prefix', 'g_plots_dir',
'g_plots_run_prefix']))
from ccbbucsd.utilities.files_and_paths import verify_or_make_dir
verify_or_make_dir(g_collapsed_counts_dir)
verify_or_make_dir(g_combined_counts_dir)
verify_or_make_dir(g_plots_dir)
###Output
_____no_output_____
###Markdown
Count File Suffixes
###Code
# %load -s get_counts_file_suffix /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/construct_counter.py
def get_counts_file_suffix():
return "counts.txt"
# %load -s get_collapsed_counts_file_suffix,get_combined_counts_file_suffix /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/count_combination.py
def get_collapsed_counts_file_suffix():
return "collapsed.txt"
def get_combined_counts_file_suffix():
return "counts_combined.txt"
###Output
_____no_output_____
###Markdown
Count Plots Functions
###Code
# %load /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/count_plots.py
# third-party libraries
import matplotlib.pyplot
import numpy
import pandas
# ccbb libraries
from ccbbucsd.utilities.analysis_run_prefixes import strip_run_prefix
from ccbbucsd.utilities.files_and_paths import build_multipart_fp, get_file_name_pieces, get_filepaths_by_prefix_and_suffix
# project-specific libraries
from ccbbucsd.malicrispr.count_files_and_dataframes import get_counts_df
__author__ = "Amanda Birmingham"
__maintainer__ = "Amanda Birmingham"
__email__ = "[email protected]"
__status__ = "prototype"
DEFAULT_PSEUDOCOUNT = 1
def get_boxplot_suffix():
return "boxplots.png"
def make_log2_series(input_series, pseudocount_val):
revised_series = input_series + pseudocount_val
log2_series = revised_series.apply(numpy.log2)
nan_log2_series = log2_series.replace([numpy.inf, -numpy.inf], numpy.nan)
return nan_log2_series.dropna().reset_index(drop=True)
# note that .reset_index(drop=True) is necessary as matplotlib boxplot function (perhaps among others)
# throws an error if the input series doesn't include an item with index 0--which can be the case if
# that first item was NaN and was dropped, and series wasn't reindexed.
def show_and_save_histogram(output_fp, title, count_data):
matplotlib.pyplot.figure(figsize=(20,20))
matplotlib.pyplot.hist(count_data)
matplotlib.pyplot.title(title)
matplotlib.pyplot.xlabel("log2(raw counts)")
matplotlib.pyplot.ylabel("Frequency")
matplotlib.pyplot.savefig(output_fp)
matplotlib.pyplot.show()
def show_and_save_boxplot(output_fp, title, samples_names, samples_data, rotation_val=0):
fig = matplotlib.pyplot.figure(1, figsize=(20,20))
ax = fig.add_subplot(111)
bp = ax.boxplot(samples_data)
ax.set_xticklabels(samples_names, rotation=rotation_val)
ax.set_xlabel("samples")
ax.set_ylabel("log2(raw counts)")
matplotlib.pyplot.title(title)
fig.savefig(output_fp, bbox_inches='tight')
matplotlib.pyplot.show()
def plot_raw_counts(input_dir, input_run_prefix, counts_suffix, output_dir, output_run_prefix, boxplot_suffix):
counts_fps_for_run = get_filepaths_by_prefix_and_suffix(input_dir, input_run_prefix, counts_suffix)
for curr_counts_fp in counts_fps_for_run:
_, curr_sample, _ = get_file_name_pieces(curr_counts_fp)
stripped_sample = strip_run_prefix(curr_sample, input_run_prefix)
count_header, curr_counts_df = get_counts_df(curr_counts_fp, input_run_prefix)
curr_counts_df.rename(columns={count_header:stripped_sample}, inplace=True)
count_header = stripped_sample
log2_series = make_log2_series(curr_counts_df[count_header], DEFAULT_PSEUDOCOUNT)
title = " ".join([input_run_prefix, count_header, "with pseudocount", str(DEFAULT_PSEUDOCOUNT)])
output_fp_prefix = build_multipart_fp(output_dir, [count_header, input_run_prefix])
boxplot_fp = output_fp_prefix + "_" + boxplot_suffix
show_and_save_boxplot(boxplot_fp, title, [count_header], log2_series)
hist_fp = output_fp_prefix + "_" + "hist.png"
show_and_save_histogram(hist_fp, title, log2_series)
def plot_combined_raw_counts(input_dir, input_run_prefix, combined_suffix, output_dir, output_run_prefix, boxplot_suffix):
output_fp = build_multipart_fp(output_dir, [output_run_prefix, boxplot_suffix])
combined_counts_fp = build_multipart_fp(input_dir, [input_run_prefix, combined_suffix])
combined_counts_df = pandas.read_table(combined_counts_fp)
samples_names = combined_counts_df.columns.values[1:] # TODO: remove hardcode
samples_data = []
for curr_name in samples_names:
log2_series = make_log2_series(combined_counts_df[curr_name], DEFAULT_PSEUDOCOUNT)
samples_data.append(log2_series.tolist())
title = " ".join([input_run_prefix, "all samples", "with pseudocount", str(DEFAULT_PSEUDOCOUNT)])
show_and_save_boxplot(output_fp, title, samples_names, samples_data, 90)
###Output
_____no_output_____
###Markdown
Individual fastq Plots
###Code
from ccbbucsd.utilities.files_and_paths import summarize_filenames_for_prefix_and_suffix
print(summarize_filenames_for_prefix_and_suffix(g_fastq_counts_dir, g_fastq_counts_run_prefix, get_counts_file_suffix()))
# this call makes one boxplot per raw fastq
plot_raw_counts(g_fastq_counts_dir, g_fastq_counts_run_prefix, get_counts_file_suffix(), g_plots_dir,
g_plots_run_prefix, get_boxplot_suffix())
###Output
_____no_output_____
###Markdown
Individual Sample Plots
###Code
print(summarize_filenames_for_prefix_and_suffix(g_collapsed_counts_dir, g_collapsed_counts_run_prefix,
get_collapsed_counts_file_suffix()))
plot_raw_counts(g_collapsed_counts_dir, g_collapsed_counts_run_prefix, get_collapsed_counts_file_suffix(),
g_plots_dir, g_plots_run_prefix, get_boxplot_suffix())
###Output
_____no_output_____
###Markdown
Combined Samples Plots
###Code
print(summarize_filenames_for_prefix_and_suffix(g_combined_counts_dir, g_combined_counts_run_prefix,
get_combined_counts_file_suffix()))
plot_combined_raw_counts(g_combined_counts_dir, g_combined_counts_run_prefix, get_combined_counts_file_suffix(),
g_plots_dir, g_plots_run_prefix, get_boxplot_suffix())
###Output
_____no_output_____ |
intrinsic_dim/plots/more/fnn_mnist.ipynb | ###Markdown
2-layer FNN on MNIST
This is an MLP (784-200-200-10) on MNIST, trained with the Adam algorithm (lr=0.001) for 100 epochs.

- 100 hidden units: Total params: 89,610; Trainable params: 89,610; Non-trainable params: 0
- 200 hidden units: Total params: 199,210; Trainable params: 199,210; Non-trainable params: 0
- 200 hidden units with 10 intrinsic dim: Total params: 2,191,320; Trainable params: 10; Non-trainable params: 2,191,310
- 200 hidden units with 5000 intrinsic dim: Total params: 996,254,210; Trainable params: 5,000; Non-trainable params: 996,249,210
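As a sanity check on these numbers, and assuming the usual intrinsic-dimension reparameterization $\theta = \theta_0 + P\,\theta_d$ (frozen initial weights $\theta_0 \in \mathbb{R}^D$, frozen random projection $P \in \mathbb{R}^{D \times d}$, trainable subspace vector $\theta_d \in \mathbb{R}^d$; this notation is an assumption, since the model code is not shown in this notebook):

$$\text{trainable} = d, \qquad \text{non-trainable} = \underbrace{D}_{\theta_0} + \underbrace{D\,d}_{P} = D(d+1), \qquad \text{total} = D(d+1) + d.$$

With $D = 199{,}210$ for the 200-unit MLP: $d=10$ gives $199{,}210 \times 11 + 10 = 2{,}191{,}320$ total parameters, and $d=5000$ gives $199{,}210 \times 5001 + 5000 = 996{,}254{,}210$, matching the counts above.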
###Code
import os, sys
import numpy as np
from matplotlib.pyplot import *
%matplotlib inline
results_dir = '../results'
class Results():
    def __init__(self):
        self.train_loss = []
        self.train_accuracy = []
        self.valid_loss = []
        self.valid_accuracy = []
        self.run_time = []
    def add_entry(self, train_loss, train_accuracy, valid_loss, valid_accuracy, run_time):
        self.train_loss.append(train_loss)
        self.train_accuracy.append(train_accuracy)
        self.valid_loss.append(valid_loss)
        self.valid_accuracy.append(valid_accuracy)
        self.run_time.append(run_time)
    def add_entry_list(self, entry):
        self.add_entry(entry[0], entry[1], entry[2], entry[3], entry[4])
    def list2np(self):
        # convert the accumulated lists to numpy arrays for plotting/arithmetic
        self.train_loss = np.array(self.train_loss)
        self.train_accuracy = np.array(self.train_accuracy)
        self.valid_loss = np.array(self.valid_loss)
        self.valid_accuracy = np.array(self.valid_accuracy)
        self.run_time = np.array(self.run_time)
dim = [10, 50, 100, 300, 500, 1000, 2000, 3000, 4000, 5000]
i = 0
# filename list of diary
diary_names = []
for subdir, dirs, files in os.walk(results_dir):
for file in files:
if file == 'diary':
fname = os.path.join(subdir, file)
diary_names.append(fname)
diary_names_ordered = []
for d in dim:
for f in diary_names:
if str(d)+'/' in f:
# print "%d is in" % d + f
diary_names_ordered.append(f)
if '_200dir/' in f:
diary_names_dir = f
if '_dir/' in f:
diary_names_dir_100 = f
# extrinsic update method
with open(diary_names_dir,'r') as ff:
lines0 = ff.readlines()
R_dir = extract_num(lines0)
with open(diary_names_dir_100,'r') as ff:
lines0 = ff.readlines()
R_dir_100 = extract_num(lines0)
print "200 hiddent units:\n" + str(R_dir) + "\n"
print "100 hiddent units:\n" + str(R_dir_100) + "\n"
# intrinsic update method
Rs = []
i = 0
for fname in diary_names_ordered:
with open(fname,'r') as ff:
lines0 = ff.readlines()
R = extract_num(lines0)
print "%d dim:\n"%dim[i] + str(R) + "\n"
i += 1
Rs.append(R)
Rs = np.array(Rs)
def extract_num(lines0):
valid_loss_str = lines0[-5]
valid_accuracy_str = lines0[-6]
train_loss_str = lines0[-8]
train_accuracy_str = lines0[-9]
run_time_str = lines0[-10]
valid_loss = float(valid_loss_str.split( )[-1])
valid_accuracy = float(valid_accuracy_str.split( )[-1])
train_loss = float(train_loss_str.split( )[-1])
train_accuracy = float(train_accuracy_str.split( )[-1])
run_time = float(run_time_str.split( )[-1])
return valid_loss, valid_accuracy, train_loss, train_accuracy, run_time
###Output
_____no_output_____
###Markdown
Performance comparison with Baseline
###Code
N = 10
fig, ax = subplots(1)
ax.plot(dim, Rs[:,0],'b-', label="Testing")
ax.plot(dim, R_dir[0]*np.ones(N),'b-', label="Testing: baseline")
ax.plot(dim, Rs[:,2],'g-', label="Training")
ax.plot(dim, R_dir[2]*np.ones(N),'g-', label="Training: baseline")
ax.scatter(dim, Rs[:,0])
ax.scatter(dim, Rs[:,2])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Cross Entropy Loss')
ax.set_title('Cross Entropy Loss')
ax.legend()
ax.grid()
ax.set_ylim([-0.1,1.1])
fig.set_size_inches(8, 5)
fig, ax = subplots(1)
ax.plot(dim, Rs[:,1],'b-', label="Testing")
ax.plot(dim, R_dir[1]*np.ones(N),'b-', label="Testing: baseline")
ax.plot(dim, Rs[:,3],'g-', label="Training")
ax.plot(dim, R_dir[3]*np.ones(N),'g-', label="Training: baseline")
ax.scatter(dim, Rs[:,1])
ax.scatter(dim, Rs[:,3])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Cross Entropy Accuracy')
ax.set_title('Cross Entropy Accuracy')
ax.legend()
ax.grid()
ax.set_ylim([0.75,1.01])
fig.set_size_inches(8, 5)
fig, ax = subplots(1)
ax.plot(dim, Rs[:,4],'g-', label="Training")
ax.plot(dim, R_dir[4]*np.ones(N),'g-', label="Training: baseline")
ax.scatter(dim, Rs[:,4])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Time (second)')
ax.set_title('Wall Clock Time')
ax.legend()
ax.grid()
# ax.set_ylim([0.75,100.01])
fig.set_size_inches(8, 5)
###Output
_____no_output_____
###Markdown
Performance Per Dim
###Code
NRs = Rs/np.array(dim).reshape(N,1)
print NRs
fig, ax = subplots(1)
ax.plot(dim, NRs[:,0],'b-', label="Testing")
ax.scatter(dim, NRs[:,0])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Cross Entropy Loss per dim')
ax.set_title('Cross Entropy Loss per Dim')
ax.legend()
ax.grid()
fig.set_size_inches(8, 5)
fig, ax = subplots(1)
ax.plot(dim, NRs[:,2],'g-', label="Training")
ax.scatter(dim, NRs[:,2])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Cross Entropy Loss per dim')
ax.set_title('Cross Entropy Loss per Dim')
ax.legend()
ax.grid()
fig.set_size_inches(8, 5)
fig, ax = subplots(1)
ax.plot(dim, NRs[:,1],'b-', label="Testing")
ax.scatter(dim, NRs[:,1])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Cross Entropy Accuracy per dim')
ax.set_title('Cross Entropy Accuracy per Dim')
ax.legend()
ax.grid()
fig.set_size_inches(8, 5)
fig, ax = subplots(1)
ax.plot(dim, NRs[:,3],'g-', label="Training")
ax.scatter(dim, NRs[:,3])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Cross Entropy Accuracy per dim')
ax.set_title('Cross Entropy Accuracy per Dim')
ax.legend()
ax.grid()
fig.set_size_inches(8, 5)
fig, ax = subplots(1)
ax.plot(dim, NRs[:,4],'g-', label="Training")
ax.scatter(dim, NRs[:,4])
ax.set_xlabel('Intrinsic Dim')
ax.set_ylabel('Time (second)')
ax.set_title('Wall Clock Time')
ax.legend()
ax.grid()
# ax.set_ylim([0.75,100.01])
fig.set_size_inches(8, 5)
###Output
_____no_output_____ |
notebooks/lesson4-collab.ipynb | ###Markdown
Collaborative Filtering using fastai---
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.collab import *
import fastai
fastai.__version__
###Output
_____no_output_____
###Markdown
Load Movielens Data Download Data
###Code
! touch ~/.fastai/data/ml-100k.zip
! curl 'http://files.grouplens.org/datasets/movielens/ml-100k.zip' --output ~/.fastai/data/ml-100k.zip
! unzip ~/.fastai/data/ml-100k.zip -d ~/.fastai/data/
path = Path('/home/aman/.fastai/data/ml-100k')
path.ls()
###Output
_____no_output_____
###Markdown
Read into DataFrame
###Code
ratings = pd.read_csv(path/'u.data', sep='\t', header=None, names=['userID', 'itemID','rating', 'timestamp'])
ratings.head()
movies = pd.read_csv(path/'u.item', sep='|', header=None, encoding='latin-1',names=['itemID', 'title', *[f'col_{i}' for i in range(22)]])
movies.head()
movies_ratings = ratings.merge(movies[['itemID', 'title']])
movies_ratings.head()
###Output
_____no_output_____
###Markdown
Create DataBunch
###Code
data = CollabDataBunch.from_df(movies_ratings, valid_pct=0.1,
user_name='userID', item_name='title', rating_name='rating')
data.show_batch()
ratings_range = [0,5.5]
###Output
_____no_output_____
###Markdown
Train Collaborative Filtering Learner
###Code
learner = collab_learner(data, n_factors=50, y_range=ratings_range, metrics=accuracy_thresh)
learner.model
learner.lr_find()
learner.recorder.plot(skip_end=15)
lr =1e-2
learner.fit_one_cycle(3, lr)
learner.fit_one_cycle(3, lr)
learner.save('dotprod')
###Output
_____no_output_____
###Markdown
Interpretation
###Code
learner = collab_learner(data, n_factors=50, y_range=ratings_range, metrics=accuracy_thresh)
learner.load('dotprod');
learner.model
###Output
_____no_output_____
###Markdown
For Most Rated Movies
###Code
movies_ratings.head()
g = movies_ratings.groupby('title')['rating'].count()
top_movies = g.sort_values(ascending=False)[:1000]
top_movies[:10]
top_movies[-10:]
###Output
_____no_output_____
###Markdown
Movie Bias
###Code
bias = learner.bias(top_movies.index)
bias.shape
mean_ratings = movies_ratings.groupby('title')['rating'].mean()
mean_ratings.head()
movie_bias = [(i,b, mean_ratings[i]) for i,b in zip(top_movies.index, bias)]
movie_bias[:5]
mean_ratings['Star Wars (1977)'], bias[0]
sorted(movie_bias, key=lambda x:x[1], reverse=True)[:10]
sorted(movie_bias, key=lambda x:x[1], reverse=False)[:10]
###Output
_____no_output_____
###Markdown
Movie Weights
###Code
weights = learner.weight(top_movies.index)
weights.shape
(fac1, fac2) = weights.pca(k=2).t()
movie_weigts = [(i, f1, f2, mean_ratings[i]) for i,f1,f2 in zip(top_movies.index, fac1, fac2)]
###Output
_____no_output_____
###Markdown
**Factor 1 representation**
###Code
print(*sorted(movie_weigts, key=lambda x:x[1], reverse=True)[:10], sep='\n')
print(*sorted(movie_weigts, key=lambda x:x[1], reverse=False)[:10], sep='\n')
###Output
('Shadow Conspiracy (1997)', tensor(-1.3907), tensor(0.3918), 2.8636363636363638)
('Beverly Hills Cop III (1994)', tensor(-1.3629), tensor(0.6043), 2.392857142857143)
('Beverly Hillbillies, The (1993)', tensor(-1.3618), tensor(0.2698), 2.25)
('Turbulence (1997)', tensor(-1.3186), tensor(-0.2513), 2.5652173913043477)
('Batman & Robin (1997)', tensor(-1.2267), tensor(0.2347), 2.4516129032258065)
('Bio-Dome (1996)', tensor(-1.2169), tensor(0.7667), 1.903225806451613)
('Batman Forever (1995)', tensor(-1.1698), tensor(0.5102), 2.6666666666666665)
('Net, The (1995)', tensor(-1.1253), tensor(0.0352), 3.0083333333333333)
('D3: The Mighty Ducks (1996)', tensor(-1.1148), tensor(0.0227), 2.5789473684210527)
('Tales from the Hood (1995)', tensor(-1.0973), tensor(0.4494), 2.037037037037037)
###Markdown
**Factor 2 representation**
###Code
print(*sorted(movie_weigts, key=lambda x:x[2], reverse=True)[:10], sep='\n')
print(*sorted(movie_weigts, key=lambda x:x[2], reverse=False)[:10], sep='\n')
###Output
('Braveheart (1995)', tensor(-0.1758), tensor(-1.2507), 4.151515151515151)
("It's a Wonderful Life (1946)", tensor(0.1448), tensor(-1.0530), 4.121212121212121)
('Sleepless in Seattle (1993)', tensor(-0.1769), tensor(-1.0450), 3.539906103286385)
('Miracle on 34th Street (1994)', tensor(0.0406), tensor(-0.9652), 3.722772277227723)
('Amateur (1994)', tensor(0.5791), tensor(-0.9586), 3.1666666666666665)
('American President, The (1995)', tensor(-0.6015), tensor(-0.9311), 3.6280487804878048)
('Dave (1993)', tensor(-0.2710), tensor(-0.9055), 3.65)
('Dirty Dancing (1987)', tensor(-0.6857), tensor(-0.9039), 3.1020408163265305)
('Meet John Doe (1941)', tensor(0.8002), tensor(-0.8891), 3.92)
('Now and Then (1995)', tensor(-0.4483), tensor(-0.8846), 3.4583333333333335)
###Markdown
**PCA Visualization**
###Code
idxs = np.random.choice(len(top_movies), size=50, replace=False)
x = fac1[idxs]
y = fac2[idxs]
movie_titles = top_movies[idxs]
fig, ax = plt.subplots(figsize=(15,15))
ax.scatter(x, y)
for title, x_i, y_i in zip(movie_titles.index, x, y):
ax.text(x_i,y_i,title)
###Output
_____no_output_____
###Markdown
Imports
###Code
from fastai import *
from fastai.collab import *
from fastai.tabular import *
import seaborn as sns
sns.set()
%matplotlib inline
###Output
_____no_output_____
###Markdown
Sample of movies data `collab` models use data in a `DataFrame` of user, items, and ratings.
###Code
user, item, title = 'userId', 'movieId', 'title'
path = untar_data(URLs.ML_SAMPLE)
path
ratings = pd.read_csv(path / 'ratings.csv')
ratings.head()
###Output
_____no_output_____
###Markdown
That's all we need to create and train a model: `CollabDataBunch` assumes the first column is user, the second is movie, and the third is rating.
###Code
data = CollabDataBunch.from_df(ratings, seed=42)
# Since we are using sigmoid to restrict values to be between 0 & 5, sigmoid
# saturates at the lower and upper intervals and may not actually get a prediction
# that is 0 or 5 even though we have a lot of movies that have been rated at 5.
# Therefore, we subtract a small number from the minimum and add the same number to
# the maximum. In our case, the minimum was 0.5 and the maximum was 5 --> subtract
# 0.5 from the min and add 0.5 to the max --> new y_range = [0.5 - 0.5, 5 + 0.5] = [0, 5.5]
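# Worked illustration (assuming fastai maps the raw score res to
# sigmoid(res) * (y_range[1] - y_range[0]) + y_range[0], as in the EmbeddingDotBias
# source quoted later in this notebook): with y_range = [0, 5.5], res = 0 maps to
# 0.5 * 5.5 = 2.75, and a very large res approaches (but never reaches) 5.5,
# which is why we leave headroom above the true maximum rating of 5.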
y_range = [0, 5.5]
# n_factors is the width of the embedding matrix
learn = collab_learner(data, n_factors=50, y_range=y_range)
learn.fit_one_cycle(3, 5e-3)
###Output
Total time: 00:02
epoch train_loss valid_loss
1 1.585587 0.937921 (00:00)
2 0.836838 0.679799 (00:00)
3 0.661621 0.675431 (00:00)
###Markdown
Movielens 100k Let's try with the full Movielens 100k dataset, available from http://files.grouplens.org/datasets/movielens/ml-100k.zip
###Code
path = Path('../data/ml-100k/')
path
path.ls()
!head -10 ../data/ml-100k/u.data
ratings = pd.read_csv(path / 'u.data', delimiter='\t', header=None,
names=[user, item, 'rating', 'timestamp'])
ratings.head()
!head -10 ../data/ml-100k/u.item
movies = pd.read_csv(path / 'u.item', delimiter='|', encoding='latin-1', header=None,
names=[item, 'title', 'date', 'N', 'url', *[f'g{i}' for i in range(19)]])
movies.head()
len(ratings)
rating_movie = ratings.merge(movies[[item, title]])
rating_movie.head()
data = CollabDataBunch.from_df(rating_movie, seed=42, pct_val=0.1, item_name=title)
len(data.train_ds), len(data.valid_ds)
data.show_batch()
y_range = [0, 5.5]
learn = collab_learner(data, n_factors=40, y_range=y_range, wd=1e-1)
learn.lr_find()
learn.recorder.plot(skip_end=15)
learn.fit_one_cycle(5, 5e-3)
np.sqrt(0.812)
learn.save('dotprod')
###Output
_____no_output_____
###Markdown
Here's [some benchmarks](https://www.librec.net/release/v1.3/example.html) on the same dataset for the popular Librec system for collaborative filtering. They show best results based on RMSE of 0.91, which corresponds to an MSE of `0.91**2 = 0.83`. Interpretation Setup
###Code
learn.load('dotprod');
learn.model
rating_movie.userId.nunique(), rating_movie.title.nunique()
# Get the top 1000 movies by number of ratings.
g = rating_movie.groupby(title)['rating'].count()
top_movies = g.sort_values(ascending=False).index.values[:1000]
top_movies[:10]
###Output
_____no_output_____
###Markdown
Movie bias
###Code
# In the collaborative filtering setting, we use the terms user and item
# even if the task has no item per se. For example, the item here is a movie.
movie_bias = learn.bias(top_movies, is_item=True)
movie_bias.shape
# Get the average rating per movie and then zip it with the bias
# and the title of the movie
mean_ratings = np.round(rating_movie.groupby(title)['rating'].mean(), 2)
movie_ratings = [(bias, movie, mean_ratings.loc[movie])
for movie, bias in zip(top_movies, movie_bias)]
movie_ratings[:5]
# Sort by bias
sorted(movie_ratings, key=lambda x:x[0])[:15]
sorted(movie_ratings, key=lambda x:x[0], reverse=True)[:15]
###Output
_____no_output_____
###Markdown
Movie weights We'll be looking at the same top 1000 movies used above.
###Code
movie_w = learn.weight(top_movies, is_item=True)
movie_w.shape
###Output
_____no_output_____
###Markdown
The width of the embedding is 40 which is the latent space it tries to learn for each movie.
###Code
# We will use PCA for dimensionally reduction to project each movie
# from space of dimension 40 to 3-D so that it is easier to explore
movie_pca = movie_w.pca(3)
movie_pca.shape
# We're getting the three factors (principal components)
fac0, fac1, fac2 = movie_pca.t()
fac0.shape, fac1.shape, fac2.shape,
###Output
_____no_output_____
###Markdown
Factor 1
###Code
movie_comp = [(fac, movie) for fac, movie in zip(fac0, top_movies)]
movie_comp[:5]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
###Output
_____no_output_____
###Markdown
Factor 2
###Code
movie_comp = [(fac, movie) for fac, movie in zip(fac1, top_movies)]
movie_comp[:5]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
###Output
_____no_output_____
###Markdown
Plot learned weights
###Code
idxs = list(range(50))
X = fac0[idxs]
Y = fac2[idxs]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(top_movies[idxs], X, Y):
plt.text(x,y,i, color=np.random.rand(3)*0.7, fontsize=11)
plt.show()
dl = iter(data.train_dl)
o = dl.__next__()
o[0][0].size()
###Output
_____no_output_____
###Markdown
```pythonclass EmbeddingDotBias(nn.Module): "Base model for callaborative filtering." def __init__(self, n_factors:int, n_users:int, n_items:int, y_range:Tuple[float,float]=None): super().__init__() self.y_range = y_range Each user will have a bias and each movie will also have a bias. (self.u_weight, self.i_weight, self.u_bias, self.i_bias) = \ [embedding(*o) for o in [(n_users, n_factors), (n_items, n_factors), (n_users,1), (n_items,1) ]] def forward(self, users:LongTensor, items:LongTensor) -> Tensor: users and items will tensors that hold the indices that will be used look up their values from the embedding matrics. dot is element-wise product of the embeddings of users and items dot = self.u_weight(users)* self.i_weight(items) Then sum the dot which will be dot product of the user values and item values from embedding matrics We also add the bias for each user and item res = dot.sum(1) + self.u_bias(users).squeeze() + self.i_bias(items).squeeze() if self.y_range is None: return res return torch.sigmoid(res) * (self.y_range[1]-self.y_range[0]) + self.y_range[0]``` ```pythondef collab_learner(data, n_factors:int=None, use_nn:bool=False, metrics=None, emb_szs:Dict[str,int]=None, wd:float=0.01, **kwargs)->Learner: "Create a Learner for collaborative filtering." emb_szs = data.get_emb_szs(ifnone(emb_szs, {})) u, m = data.classes.values() if use_nn: model = EmbeddingNN(emb_szs=emb_szs, **kwargs) else: model = EmbeddingDotBias(n_factors, len(u), len(m), **kwargs) return CollabLearner(data, model, metrics=metrics, wd=wd)```
###Code
u, m, = data.classes.values()
u, len(u)
m, len(m)
rating_movie.userId.nunique(), rating_movie.title.nunique()
###Output
_____no_output_____
###Markdown
Collaborative filtering example `collab` models use data in a `DataFrame` of user, items, and ratings.
###Code
user,item,title = 'userId','movieId','title'
path = untar_data(URLs.ML_SAMPLE)
path
ratings = pd.read_csv(path/'ratings.csv')
ratings.head()
%dipush ratings
%autodip on
###Output
Pushing parameters to DDP namespace: ['ratings']
Auto Execution on DDP group: on, will run cell as %%dip
###Markdown
That's all we need to create and train a model:
###Code
data = CollabDataBunch.from_df(ratings, seed=42)
y_range = [0,5.5]
learn = collab_learner(data, n_factors=50, y_range=y_range)
learn.fit_one_cycle(3, 5e-3)
%autodip off
###Output
Auto Execution on DDP group: Off
###Markdown
Movielens 100k Let's try with the full Movielens 100k dataset, available from http://files.grouplens.org/datasets/movielens/ml-100k.zip
###Code
path=Config.data_path()/'ml-100k'
ratings = pd.read_csv(path/'u.data', delimiter='\t', header=None,
names=[user,item,'rating','timestamp'])
ratings.head()
movies = pd.read_csv(path/'u.item', delimiter='|', encoding='latin-1', header=None,
names=[item, 'title', 'date', 'N', 'url', *[f'g{i}' for i in range(19)]])
movies.head()
len(ratings)
rating_movie = ratings.merge(movies[[item, title]])
rating_movie.head()
%dipush rating_movie title
%autodip on
data = CollabDataBunch.from_df(rating_movie, seed=42, valid_pct=0.1, item_name=title)
data.show_batch()
y_range = [0,5.5]
learn = collab_learner(data, n_factors=40, y_range=y_range, wd=1e-1)
learn.lr_find()
learn.recorder.plot(skip_end=15)
learn.fit_one_cycle(5, 5e-3)
learn.save('dotprod')
###Output
%%dip : Running cell in remote DDP namespace (GPUs: [0, 1, 2]).
###Markdown
Here's [some benchmarks](https://www.librec.net/release/v1.3/example.html) on the same dataset for the popular Librec system for collaborative filtering. They show best results based on RMSE of 0.91, which corresponds to an MSE of `0.91**2 = 0.83`. Interpretation Setup
###Code
learn.load('dotprod');
learn.model
g = rating_movie.groupby(title)['rating'].count()
top_movies = g.sort_values(ascending=False).index.values[:1000]
top_movies[:10]
###Output
%%dip : Running cell in remote DDP namespace (GPUs: [0, 1, 2]).
###Markdown
Movie bias
###Code
movie_bias = learn.bias(top_movies, is_item=True)
movie_bias.shape
mean_ratings = rating_movie.groupby(title)['rating'].mean()
movie_ratings = [(b, i, mean_ratings.loc[i]) for i,b in zip(top_movies,movie_bias)]
item0 = lambda o:o[0]
sorted(movie_ratings, key=item0)[:15]
sorted(movie_ratings, key=lambda o: o[0], reverse=True)[:15]
###Output
%%dip : Running cell in remote DDP namespace (GPUs: [0, 1, 2]).
###Markdown
Movie weights
###Code
movie_w = learn.weight(top_movies, is_item=True)
movie_w.shape
movie_pca = movie_w.pca(3)
movie_pca.shape
fac0,fac1,fac2 = movie_pca.t()
movie_comp = [(f, i) for f,i in zip(fac0, top_movies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
movie_comp = [(f, i) for f,i in zip(fac1, top_movies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
idxs = np.random.choice(len(top_movies), 50, replace=False)
idxs = list(range(50))
X = fac0[idxs]
Y = fac2[idxs]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(top_movies[idxs], X, Y):
plt.text(x,y,i, color=np.random.rand(3)*0.7, fontsize=11)
plt.show()
###Output
%%dip : Running cell in remote DDP namespace (GPUs: [0, 1, 2]).
|
misc/projections_to_the_line_and_observed_distributions.ipynb | ###Markdown
Short Bursts DistributionsWe look at short bursts on PA and AR senate.
###Code
import matplotlib.pyplot as plt
from gerrychain import (GeographicPartition, Partition, Graph, MarkovChain,
proposals, updaters, constraints, accept, Election)
from gerrychain.proposals import recom, propose_random_flip
from gerrychain.tree import recursive_tree_part
from gerrychain.metrics import mean_median, efficiency_gap, polsby_popper, partisan_gini
from functools import (partial, reduce)
import pandas
import geopandas as gp
import numpy as np
import networkx as nx
import pickle
import seaborn as sns
import pprint
import operator
import scipy
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale, normalize
import random
from nltk.util import bigrams
from nltk.probability import FreqDist
from gingleator import Gingleator
from numpy.random import randn
from scipy.stats import norm, probplot
## This function takes a name of a shapefile and returns a tuple of the graph
## and its associated dataframe
def build_graph(filename):
print("Pulling in Graph from Shapefile: " + filename)
graph = Graph.from_file(filename)
df = gp.read_file(filename)
return(graph, df)
# graph, df = build_graph("AR_shape/AR.shp")
# pickle.dump(graph, open("graph_AR.p", "wb"))
# pickle.dump(df, open("df_AR.p", "wb"))
## Set up PA enacted
graph_PA = pickle.load(open("PA_graph.p", "rb"))
df_PA = pickle.load(open("PA_df.p", "rb"))
PA_updaters = {"population": updaters.Tally("TOT_POP", alias="population"),
"bvap": updaters.Tally("BLACK_POP", alias="bvap"),
"vap": updaters.Tally("VAP", alias="vap"),
"bvap_prec": lambda part: {k: part["bvap"][k] / part["population"][k] for k in part["bvap"]}}
PA_enacted_senate = GeographicPartition(graph_PA, assignment="SSD",
updaters=PA_updaters)
total_population_PA = sum(df_PA.TOT_POP.values)
ideal_population_PA = total_population_PA / 50
seed_part_senate = recursive_tree_part(graph_PA, range(50), pop_col="TOT_POP",
pop_target=ideal_population_PA,
epsilon=0.01, node_repeats=1)
PA_seed_seante = GeographicPartition(graph_PA, assignment=seed_part_senate,updaters=PA_updaters)
## Set up AR
graph_AR = pickle.load(open("graph_AR.p", "rb"))
df_AR = pickle.load(open("df_AR.p", "rb"))
AR_updaters = {"population": updaters.Tally("TOTPOP", alias="population"),
"bvap": updaters.Tally("BVAP", alias="bvap"),
"vap": updaters.Tally("VAP", alias="vap"),
"bvap_prec": lambda part: {k: part["bvap"][k] / part["vap"][k]
for k in part["bvap"]}}
AR_enacted_senate = GeographicPartition(graph_AR, assignment="SSD", updaters=AR_updaters)
AR_enacted_house = GeographicPartition(graph_AR, assignment="SHD", updaters=AR_updaters)
total_population_AR = sum(df_AR.TOTPOP.values)
ideal_population_AR = total_population_AR / 35
senate_seed = recursive_tree_part(graph_AR, range(35), pop_col="TOTPOP",
pop_target=ideal_population_AR,
epsilon=0.01, node_repeats=1)
AR_seed_senate = GeographicPartition(graph_AR, assignment=senate_seed,updaters=AR_updaters)
house_seed = recursive_tree_part(graph_AR, range(100),
pop_col="TOTPOP",
pop_target=total_population_AR / 100,
epsilon=0.05, node_repeats=1)
AR_seed_house = GeographicPartition(graph_AR, assignment=house_seed,
updaters=AR_updaters)
H_enact = Gingleator.num_opportunity_dists(AR_enacted_house, "bvap_prec", 0.4)
H_seed = Gingleator.num_opportunity_dists(AR_seed_house, "bvap_prec", 0.4)
Gingleator.num_opportunity_dists(AR_seed_senate, "bvap_prec", 0.4)
Gingleator.num_opportunity_dists(AR_enacted_senate, "bvap_prec", 0.4)
###Output
_____no_output_____
###Markdown
Reprojections onto the line
###Code
def transition_frequencies(observations):
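    # Each row of `observations` is treated as one walk: consecutive (i, j) pairs
    # are counted with nltk bigrams, and l1-normalizing each row of the count
    # matrix turns the counts into an empirical transition matrix.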
observations = observations.astype(int)
dim = observations.max()
seen_bigrams = []
for row in observations:
seen_bigrams.extend(bigrams(row))
fdist = FreqDist(seen_bigrams)
probs = np.zeros((dim, dim))
for k, v in fdist.items():
probs[k[0]-1][k[1]-1] = v
probs = normalize(probs, norm="l1")
return probs
def rand_walk_graph(transition_frequencies):
G = nx.from_numpy_array(transition_frequencies, create_using=nx.DiGraph)
mapping = {n: n+1 for n in G.nodes}
G = nx.relabel_nodes(G, mapping)
return G
def edge_weights(G, prec=None):
if not prec:
return dict([((u,v,), d['weight']) for u,v,d in G.edges(data=True)])
else:
return dict([((u,v,), round(d['weight'],prec)) for u,v,d in G.edges(data=True)])
PA_gingles = Gingleator(PA_seed_seante, pop_col="TOT_POP", minority_prec_col="bvap_prec",
epsilon=0.1)
AR_gingles = Gingleator(AR_seed_senate, pop_col="TOTPOP", minority_prec_col="bvap_prec",
epsilon=0.1)
###Output
_____no_output_____
###Markdown
PA random walk graph
###Code
_, PA_observations = PA_gingles.short_burst_run(num_bursts=200, num_steps=25)
PA_trans = transition_frequencies(PA_observations)
PA_rand_walk = rand_walk_graph(PA_trans)
edge_weights(PA_rand_walk)
###Output
_____no_output_____
###Markdown
AR random walk graph
###Code
_, AR_observations = AR_gingles.short_burst_run(num_bursts=200, num_steps=25)
AR_trans = transition_frequencies(AR_observations)
AR_rand_walk = rand_walk_graph(AR_trans)
edge_weights(AR_rand_walk)
###Output
_____no_output_____
###Markdown
Distribution of Observations
###Code
def stationary_distribution(graph, nodes=None):
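    # The observed counts are treated as a birth-death chain, so detailed balance
    # pi(i-1) * P(i-1, i) = pi(i) * P(i, i-1) gives each stationary weight as the
    # previous one scaled by P(i-1, i) / P(i, i-1); the result is l1-normalized.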
probs = edge_weights(graph)
if not nodes:
observed_nodes = reduce(lambda s, k: s | set(k), probs.keys(), set())
observed_nodes.remove(min(observed_nodes))
else: observed_nodes = nodes
stationary = reduce(lambda pis, i: pis + [pis[-1]*probs[i-1, i] / probs[i, i-1]], observed_nodes, [1])
stationary = normalize([stationary], norm="l1")
return stationary[0]
###Output
_____no_output_____
###Markdown
Distribution of Observations of various methods on AR state houseWe look at the distribution of times we see plans with some number of opportunity districts when we use an unbiased run, the short burst method to maximize and to minimize, and a tilted method with p=0.25 of accepting a worse plan (a sketch of the assumed acceptance rule is included at the top of the next code cell). AR house with just the count as score and 5000 iterations. Bursts are 25 steps each.
###Code
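# Hedged sketch (not the Gingleator internals) of the acceptance rule meant by a
# "tilted" run with p = 0.25: a proposal that does not lower the score is always
# kept, while a worse plan is still accepted with probability p.
def tilted_accept(current_score, proposed_score, p=0.25):
    return proposed_score >= current_score or random.random() < p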
AR_house_gingles = Gingleator(AR_seed_house, pop_col="TOTPOP", minority_prec_col="bvap_prec",
epsilon=0.1)
_, AR_observations_hub = AR_house_gingles.short_burst_run(num_bursts=1,
num_steps=5000)
_, AR_observations_hsb_max = AR_house_gingles.short_burst_run(num_bursts=200, num_steps=25)
_, AR_observations_hsb_min = AR_house_gingles.short_burst_run(num_bursts=200, num_steps=25,
maximize=False)
_, AR_observations_htilt = AR_house_gingles.biased_run(num_iters=5000)
_, AR_observations_htilt_8 = AR_house_gingles.biased_run(num_iters=5000, p=0.125)
_, AR_observations_htilt_16 = AR_house_gingles.biased_run(num_iters=5000, p=0.0625)
_, AR_observations_hsbtilt = AR_house_gingles.biased_short_burst_run(num_bursts=200,
num_steps=25)
_, AR_observations_hsbtilt_8 = AR_house_gingles.biased_short_burst_run(num_bursts=200,
num_steps=25, p=0.125)
_, AR_observations_hsb_max_5 = AR_house_gingles.short_burst_run(num_bursts=1000, num_steps=5)
_, AR_observations_hsb_max_10 = AR_house_gingles.short_burst_run(num_bursts=500, num_steps=10)
_, AR_observations_hsb_max_50 = AR_house_gingles.short_burst_run(num_bursts=100, num_steps=50)
AR_observations_hsb_tails = np.concatenate((AR_observations_hsb_max, AR_observations_hsb_min))
AR_trans_house = transition_frequencies(AR_observations_hsb_tails)
AR_house_rwgraph = rand_walk_graph(AR_trans_house)
edge_weights(AR_house_rwgraph)
AR_house_stat = stationary_distribution(AR_house_rwgraph)
AR_house_stat
AR_house_scale_stat = np.random.choice(range(6,16), 5000, p=AR_house_stat)
plt.figure(figsize=(8,6))
plt.title("AR State House (100 seats)")
plt.xlabel("Number of Opportunity Districts")
plt.ylabel("Frequency")
sns.distplot(AR_observations_hub.flatten(), kde=False, label="Unbiased", bins=30)
# sns.distplot(AR_observations_hsb1.flatten(), kde=False, label="Short Bursts", color="purple")
# sns.distplot(AR_observations_hsb_min.flatten(), kde=False, label="Short Bursts Min", color="cyan")
sns.distplot(AR_house_scale_stat, kde=False, label="RW Stationary", color="g", bins=30)
plt.axvline(x=H_enact, color="k", linestyle="--", label="enacted")
plt.axvline(x=H_seed, color="grey", linestyle="--", label="seed")
plt.legend()
plt.show()
# plt.savefig("plots/AR_state_house_unbiased_stationary_distribution.png")
plt.figure(figsize=(8,6))
plt.title("AR State House (100 seats)")
plt.xlabel("Number of Opportunity Districts")
plt.ylabel("Frequency")
sns.distplot(AR_observations_hub.flatten(), kde=False, label="Unbiased")
# sns.distplot(AR_observations_htilt, kde=False, label="Tilted Run (p=0.25)", color="g")
sns.distplot(AR_observations_hsb_max.flatten(), kde=False, label="Short Bursts Max", color="purple")
sns.distplot(AR_observations_hsb_min.flatten(), kde=False, label="Short Bursts Min", color="cyan")
plt.axvline(x=H_enact, color="k", linestyle="--", label="enacted")
plt.axvline(x=H_seed, color="grey", linestyle="--", label="seed")
plt.legend()
plt.show()
# plt.savefig("plots/AR_state_house_distribution_of_short_bursts.png")
plt.figure(figsize=(8,6))
plt.title("AR State House (100 seats)")
plt.xlabel("Number of Opportunity Districts")
plt.ylabel("Frequency")
sns.distplot(AR_observations_hub.flatten(), kde=False, label="Unbiased", color="green")
sns.distplot(AR_observations_htilt, kde=False, label="Tilted Run (p=0.25)", color="cyan")
sns.distplot(AR_observations_htilt_8.flatten(), kde=False, label="Tilted Run (p=0.125)")
# sns.distplot(AR_observations_htilt_16.flatten(), kde=False, label="Tilted Run (p=0.0625)",
# color="purple")
sns.distplot(AR_observations_hsb_max.flatten(), kde=False, label="Short Bursts", color="purple")
plt.axvline(x=H_enact, color="k", linestyle="--", label="enacted")
plt.axvline(x=H_seed, color="grey", linestyle="--", label="seed")
plt.legend()
plt.show()
# plt.savefig("plots/AR_state_house_short_bursts_vs_tilted_run.png")
plt.figure(figsize=(8,6))
plt.title("AR State House (100 seats)")
plt.xlabel("Number of Opportunity Districts")
plt.ylabel("Frequency")
sns.distplot(AR_observations_hub.flatten(), kde=False, label="Unbiased", color="green")
sns.distplot(AR_observations_htilt, kde=False, label="Tilted Run (p=0.25)", color="cyan")
sns.distplot(AR_observations_htilt_8.flatten(), kde=False, label="Tilted Run (p=0.125)")
sns.distplot(AR_observations_htilt_16.flatten(), kde=False, label="Tilted Run (p=0.0625)",
color="purple")
# sns.distplot(AR_observations_hsb_max.flatten(), kde=False, label="Short Bursts", color="purple")
plt.axvline(x=H_enact, color="k", linestyle="--", label="enacted")
plt.axvline(x=H_seed, color="grey", linestyle="--", label="seed")
plt.legend()
plt.show()
# plt.savefig("plots/AR_state_house_tilted_runs.png")
plt.figure(figsize=(8,6))
plt.title("AR State House (100 seats)")
plt.xlabel("Number of Opportunity Districts")
plt.ylabel("Frequency")
sns.distplot(AR_observations_hub.flatten(), kde=False, label="Unbiased", color="green",
bins=50)
sns.distplot(AR_observations_hsb_max.flatten(), kde=False, label="Short Bursts Max",
color="cyan", bins=50)
sns.distplot(AR_observations_hsbtilt.flatten(), kde=False,
label="Tilted Short Bursts (p=0.25)", bins=50)
sns.distplot(AR_observations_hsbtilt_8.flatten(), kde=False,
label="Tilted Short Bursts (p=0.125)", color="purple", bins=50)
plt.axvline(x=H_enact, color="k", linestyle="--", label="enacted")
plt.axvline(x=H_seed, color="grey", linestyle="--", label="seed")
plt.legend()
plt.show()
# plt.savefig("plots/AR_state_house_distribuition_of_tilted_short_bursts_runs.png")
plt.figure(figsize=(8,6))
plt.title("AR State House (100 seats)")
plt.xlabel("Number of Opportunity Districts")
plt.ylabel("Frequency")
# sns.distplot(AR_observations_hub.flatten(), kde=False, label="Unbiased", color="green",
# bins=50)
sns.distplot(AR_observations_hsb_max_5.flatten(), kde=False,
label="len 5", bins=50)
sns.distplot(AR_observations_hsb_max_10.flatten(), kde=False, label="len 10",
bins=50, color="green")
sns.distplot(AR_observations_hsb_max.flatten(), kde=False, label="len 25",
color="cyan", bins=50)
sns.distplot(AR_observations_hsb_max_50.flatten(), kde=False,
label="len 50", color="purple", bins=50)
plt.axvline(x=H_enact, color="k", linestyle="--", label="enacted")
plt.axvline(x=H_seed, color="grey", linestyle="--", label="seed")
plt.legend()
plt.show()
plt.figure(figsize=(8,10))
plt.title("AR State House: Short Bursts Walks (200, 25)")
plt.xlim(7, 17)
plt.xlabel("Number of opportunity districts")
plt.ylabel("Steps")
for i in range(200):
plt.plot(AR_observations_hsb_max[i], range(25*i, 25*(i+1)))
plt.axvline(x=H_enact, color="k", linestyle="--", label="enacted")
plt.axvline(x=H_seed, color="grey", linestyle="--", label="seed")
plt.legend()
plt.show()
# plt.savefig("plots/AR_state_house_short_burst_over_time.png")
plt.figure(figsize=(8,10))
plt.title("AR State House: Tilted Runs")
plt.xlim(4, 19)
plt.xlabel("Number of opportunity districts")
plt.ylabel("Steps")
plt.plot(AR_observations_hub.flatten(), range(5000), label="Unbiased")
plt.plot(AR_observations_htilt, range(5000), label="Tilted p=0.25")
plt.plot(AR_observations_htilt_8, range(5000), label="Tilted p=0.125")
plt.plot(AR_observations_htilt_16, range(5000), label="Tilted p=0.0625")
plt.axvline(x=H_enact, color="k", linestyle="--", label="enacted")
plt.axvline(x=H_seed, color="grey", linestyle="--", label="seed")
plt.legend()
plt.show()
# plt.savefig("plots/AR_state_house_tilted_runs_over_time.png")
plt.figure(figsize=(8,10))
plt.title("AR State House: Tilted Short Burst Runs")
plt.xlim(4, 18)
plt.xlabel("Number of opportunity districts")
plt.ylabel("Steps")
plt.plot(AR_observations_hub.flatten(), range(5000), label="Unbiased")
plt.plot(AR_observations_hsb_max.flatten(), range(5000), label="Short Burst Max")
plt.plot(AR_observations_hsbtilt.flatten(), range(5000), label="Tilted Short Burst (p=0.25)")
plt.plot(AR_observations_hsbtilt_8.flatten(), range(5000), label="Tilted Short Burst (p=0.125)")
plt.axvline(x=H_enact, color="k", linestyle="--", label="enacted")
plt.axvline(x=H_seed, color="grey", linestyle="--", label="seed")
plt.legend()
plt.show()
# plt.savefig("plots/AR_state_house_tilted_short_burst_runs_over_time.png")
plt.figure(figsize=(8,10))
plt.title("AR State House: Short Burst Runs")
plt.xlim(4, 17)
plt.xlabel("Number of opportunity districts")
plt.ylabel("Steps")
plt.plot(AR_observations_hub.flatten(), range(5000), label="Unbiased")
plt.plot(AR_observations_hsb_max.flatten(), range(5000), label="Short Burst Max")
plt.plot(AR_observations_hsb_min.flatten(), range(5000), label="Short Burst Min")
plt.axvline(x=H_enact, color="k", linestyle="--", label="enacted")
plt.axvline(x=H_seed, color="grey", linestyle="--", label="seed")
plt.legend()
plt.show()
plt.figure(figsize=(8,10))
plt.title("AR State House: Short Burst Runs")
plt.xlim(4, 17)
plt.xlabel("Number of opportunity districts")
plt.ylabel("Steps")
plt.plot(AR_observations_hsb_max_5.flatten(), range(5000), label="len 5")
plt.plot(AR_observations_hsb_max_10.flatten(), range(5000), label="len 10")
plt.plot(AR_observations_hsb_max.flatten(), range(5000), label="len 25")
plt.plot(AR_observations_hsb_max_50.flatten(), range(5000), label="len 50")
plt.axvline(x=H_enact, color="k", linestyle="--", label="enacted")
plt.axvline(x=H_seed, color="grey", linestyle="--", label="seed")
plt.legend()
plt.show()
plt.figure(figsize=(8,6))
plt.title("AR State House")
plt.hist([AR_observations_hub.flatten(), AR_observations_hsb_max.flatten(),
          AR_observations_hsb_min.flatten(), AR_house_scale_stat],
         label=["Unbiased", "Short Bursts Max", "Short Bursts Min", "Stationary RW"])
plt.legend()
plt.show()
_, PA_unbiased_run = PA_gingles.short_burst_run(num_bursts=1, num_steps=5000)
# _, PA_burst_run = PA_gingles.short_burst_run(num_bursts=100, num_steps=10)
stationary = stationary_distribution(PA_rand_walk)
stat = np.random.choice([3,4,5], 5000, p=stationary)
mu, std = norm.fit(PA_unbiased_run.flatten())
plt.figure(figsize=(10,8))
plt.title("Distributions on PA")
plt.hist([PA_unbiased_run.flatten(), PA_observations.flatten(),stat],
label=["Unbiased","Short Burst","Random Walk"])
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, std)
plt.plot(x, p*5000, 'k', linewidth=2)
plt.legend()
plt.show()
_, AR_unbiased_run = AR_gingles.short_burst_run(num_bursts=1, num_steps=5000)
AR_stationary = stationary_distribution(AR_rand_walk)
AR_stat = np.random.choice([1,2,3,4,5], 5000, p=AR_stationary)
mu, std = norm.fit(AR_unbiased_run.flatten())
plt.figure(figsize=(10,8))
plt.title("Distributions on AR")
plt.hist([AR_unbiased_run.flatten(), AR_observations.flatten(), AR_stat],
label=["Unbiased","Short Burst","Random Walk"])
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, std)
plt.plot(x, p*5000, 'k', linewidth=2)
plt.legend()
plt.show()
plt.figure(figsize=(10,8))
plt.title("Distributions on PA")
sns.distplot(PA_unbiased_run.flatten(), kde=False, label="Unbiased")
sns.distplot(PA_observations.flatten(), kde=False, label="Short Burst")
sns.distplot(stat, kde=False, label="Random Walk")
plt.legend()
plt.show()
plt.figure(figsize=(10,8))
plt.title("Distributions on AR")
sns.distplot(AR_unbiased_run.flatten(), kde=False, label="Unbiased Run")
sns.distplot(AR_observations.flatten(), kde=False, label="Short Burst")
sns.distplot(AR_stat, kde=False, label="Random Walk")
plt.legend()
plt.show()
plt.figure()
probplot(PA_unbiased_run.flatten(), plot=plt)
plt.show()
mu, std = norm.fit(PA_unbiased_run.flatten())
plt.hist(PA_unbiased_run.flatten(), bins=3, density=True, alpha=0.6, color='g')
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, std)
plt.plot(x, p, 'k', linewidth=2)
title = "Fit results: mu = %.2f, std = %.2f" % (mu, std)
plt.title(title)
plt.show()
PA_observations[100]
dist_precs = enacted_senate["bvap_prec"].values()
sum(list(map(lambda v: v >= 0.4, dist_precs)))
max(i for i in dist_precs if i < 0.4)
###Output
_____no_output_____ |
05 - Cross-validation.ipynb | ###Markdown
Cross-Validation----------------------------------------
###Code
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
from sklearn.cross_validation import cross_val_score
from sklearn.svm import LinearSVC
cross_val_score(LinearSVC(), X, y, cv=5)
cross_val_score(LinearSVC(), X, y, cv=5, scoring="f1_macro")
###Output
_____no_output_____
###Markdown
Let's go to a binary task for a moment
###Code
y % 2
cross_val_score(LinearSVC(), X, y % 2)
cross_val_score(LinearSVC(), X, y % 2, scoring="average_precision")
cross_val_score(LinearSVC(), X, y % 2, scoring="roc_auc")
from sklearn.metrics.scorer import SCORERS
print(SCORERS.keys())
###Output
_____no_output_____
###Markdown
Implementing your own scoring metric:
###Code
def my_accuracy_scoring(est, X, y):
return np.mean(est.predict(X) == y)
cross_val_score(LinearSVC(), X, y, scoring=my_accuracy_scoring)
def my_super_scoring(est, X, y):
return np.mean(est.predict(X) == y) - np.mean(est.coef_ != 0)
from sklearn.grid_search import GridSearchCV
y = iris.target
grid = GridSearchCV(LinearSVC(C=.01, dual=False),
param_grid={'penalty' : ['l1', 'l2']},
scoring=my_super_scoring)
grid.fit(X, y)
print(grid.best_params_)
###Output
_____no_output_____
###Markdown
There are other ways to do cross-validation
###Code
from sklearn.cross_validation import ShuffleSplit
shuffle_split = ShuffleSplit(len(X), 10, test_size=.4)
cross_val_score(LinearSVC(), X, y, cv=shuffle_split)
from sklearn.cross_validation import StratifiedKFold, KFold, ShuffleSplit
import numpy as np
import matplotlib.pyplot as plt

def plot_cv(cv, n_samples):
    # Visualize a CV scheme: one row per split, with the test indices highlighted.
    masks = []
    for train, test in cv:
        mask = np.zeros(n_samples, dtype=bool)
        mask[test] = 1
        masks.append(mask)
    plt.matshow(masks)
plot_cv(StratifiedKFold(y, n_folds=5), len(y))
plot_cv(KFold(len(iris.target), n_folds=5), len(iris.target))
plot_cv(ShuffleSplit(len(iris.target), n_iter=20, test_size=.2),
len(iris.target))
###Output
_____no_output_____
###Markdown
ExercisesUse KFold cross validation and StratifiedKFold cross validation (3 or 5 folds) for LinearSVC on the iris dataset.Why are the results so different? How could you get more similar results?
###Code
# %load solutions/cross_validation_iris.py
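# One possible sketch (not the official solutions file): iris is ordered by class,
# so plain KFold without shuffling puts whole classes into the test fold, while
# StratifiedKFold preserves the class balance; shuffling KFold narrows the gap.
print(cross_val_score(LinearSVC(), X, y, cv=KFold(len(y), n_folds=3)))
print(cross_val_score(LinearSVC(), X, y, cv=StratifiedKFold(y, n_folds=3)))
print(cross_val_score(LinearSVC(), X, y, cv=KFold(len(y), n_folds=3, shuffle=True)))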
###Output
_____no_output_____ |
notebooks/8_pytorch.ipynb | ###Markdown
IntroductionThis notebook predicts the `beer_style` using a neural network on the PyTorch framework. It is a modification of the 5_pytorch.ipynb notebook. After 20 epochs, there still seems to be some room for improvement. The same model is trained again for 60 more epochs. SummaryThe increase of neurons has **not** improved the model performance. The [classification report](Classification-report) shows that the validation accuracy increased to as high as 31.2%, and the test accuracy remains at 32%.
###Code
artefact_prefix = '8_pytorch'
target = 'beer_style'
%load_ext autoreload
%autoreload 2
from dotenv import find_dotenv
from datetime import datetime
import pandas as pd
from pathlib import Path
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from category_encoders.binary import BinaryEncoder
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, LabelEncoder, OneHotEncoder
from joblib import dump, load
from src.data.sets import merge_categories
from src.data.sets import save_sets
from src.data.sets import load_sets
from src.data.sets import split_sets_random
from src.data.sets import test_class_exclusion
from src.models.performance import convert_cr_to_dataframe
from src.models.pytorch import PytorchClassification_8
from src.models.pytorch import get_device
from src.models.pytorch import train_classification
from src.models.pytorch import test_classification
from src.models.pytorch import PytorchDataset
from src.models.pipes import create_preprocessing_pipe
from src.visualization.visualize import plot_confusion_matrix
###Output
_____no_output_____
###Markdown
Set up directories
###Code
project_dir = Path(find_dotenv()).parent
data_dir = project_dir / 'data'
raw_data_dir = data_dir / 'raw'
interim_data_dir = data_dir / 'interim'
processed_data_dir = data_dir / 'processed'
reports_dir = project_dir / 'reports'
models_dir = project_dir / 'models'
###Output
_____no_output_____
###Markdown
Load data
###Code
X_train, X_test, X_val, y_train, y_test, y_val = load_sets()
###Output
_____no_output_____
###Markdown
Preprocess data1. The `brewery_name` is a feature with a very high cardinality, ~5700. One hot encoding is not feasible as it will introduce 5700 very sparse columns. Another option is to use binary encoding, which would result in 14 new columns.1. Standard scaling is used to ensure that the binary columns ([0, 1]) and the review columns ([1, 5]) are on the same scale.
###Code
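# Quick check of the column count claimed above (assumption: BinaryEncoder uses
# roughly ceil(log2(n_categories)) bits): ~5,700 brewery names need 13 bits, so
# binary encoding yields on the order of 13-14 columns instead of ~5,700 one-hot ones.
import math
print(math.ceil(math.log2(5700)))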
pipe = Pipeline([
('bin_encoder', BinaryEncoder(cols=['brewery_name'])),
('scaler', StandardScaler())
])
X_train_trans = pipe.fit_transform(X_train)
X_val_trans = pipe.transform(X_val)
X_test_trans = pipe.transform(X_test)
X_train_trans.shape
n_features = X_train_trans.shape[1]
n_features
n_classes = y_train.nunique()
n_classes
###Output
_____no_output_____
###Markdown
EncodingPyTorch accepts only numerical labels.
###Code
le = LabelEncoder()
y_train_trans = le.fit_transform(y_train.to_frame())
y_val_trans = le.transform(y_val.to_frame())  # use the encoder fitted on the training labels
y_test_trans = le.transform(y_test.to_frame())
y_test_trans
###Output
_____no_output_____
###Markdown
Convert to Pytorch tensors
###Code
device = get_device()
device
train_dataset = PytorchDataset(X=X_train_trans, y=y_train_trans)
val_dataset = PytorchDataset(X=X_val_trans, y=y_val_trans)
test_dataset = PytorchDataset(X=X_test_trans, y=y_test_trans)
###Output
_____no_output_____
###Markdown
Classification model
###Code
model = PytorchClassification_8(n_features=n_features, n_classes=n_classes)
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
Train the model
###Code
N_EPOCHS = 60
BATCH_SIZE = 4096
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.9)
start_time = datetime.now()
print(f'Started: {start_time}')
for epoch in range(N_EPOCHS):
train_loss, train_acc = train_classification(train_dataset,
model=model,
criterion=criterion,
optimizer=optimizer,
batch_size=BATCH_SIZE,
device=device,
scheduler=scheduler)
valid_loss, valid_acc = test_classification(val_dataset,
model=model,
criterion=criterion,
batch_size=BATCH_SIZE,
device=device)
print(f'Epoch: {epoch}')
print(f'\t(train)\tLoss: {train_loss:.4f}\t|\tAcc: {train_acc * 100:.1f}%')
print(f'\t(valid)\tLoss: {valid_loss:.4f}\t|\tAcc: {valid_acc * 100:.1f}%')
end_time = datetime.now()
runtime = end_time - start_time
print(f'Ended: {end_time}')
print(f'Runtime: {runtime}')
N_EPOCHS = 20
BATCH_SIZE = 4096
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.9)
start_time = datetime.now()
print(f'Started: {start_time}')
for epoch in range(N_EPOCHS):
train_loss, train_acc = train_classification(train_dataset,
model=model,
criterion=criterion,
optimizer=optimizer,
batch_size=BATCH_SIZE,
device=device,
scheduler=scheduler)
valid_loss, valid_acc = test_classification(val_dataset,
model=model,
criterion=criterion,
batch_size=BATCH_SIZE,
device=device)
print(f'Epoch: {epoch}')
print(f'\t(train)\tLoss: {train_loss:.4f}\t|\tAcc: {train_acc * 100:.1f}%')
print(f'\t(valid)\tLoss: {valid_loss:.4f}\t|\tAcc: {valid_acc * 100:.1f}%')
end_time = datetime.now()
runtime = end_time - start_time
print(f'Ended: {end_time}')
print(f'Runtime: {runtime}')
###Output
Started: 2021-03-14 14:45:36.016408
Epoch: 0
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 1
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 2
(train) Loss: 0.0006 | Acc: 28.5%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 3
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 4
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 5
(train) Loss: 0.0006 | Acc: 28.5%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 6
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 7
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 8
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 9
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 10
(train) Loss: 0.0006 | Acc: 28.5%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 11
(train) Loss: 0.0006 | Acc: 28.5%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 12
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 13
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 14
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 15
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 16
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 17
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 18
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Epoch: 19
(train) Loss: 0.0006 | Acc: 28.4%
(valid) Loss: 0.0006 | Acc: 30.0%
Ended: 2021-03-14 14:49:46.324103
Runtime: 0:04:10.307695
###Markdown
Prediction
###Code
# Use the CPU version if the GPU runs out of memory.
# preds = model(test_dataset.X_tensor.to(device)).argmax(1)
model.to('cpu')
preds = model(test_dataset.X_tensor).argmax(1)
preds
model.to(device)
###Output
_____no_output_____
###Markdown
Evaluation Classification report
###Code
report = classification_report(y_test, le.inverse_transform(preds.cpu()))
print(report)
###Output
C:\Users\Roger\.conda\envs\adsi_ass_2\lib\site-packages\sklearn\metrics\_classification.py:1272: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
###Markdown
Save objects for production Save model
###Code
path = models_dir / f'{artefact_prefix}_model'
torch.save(model, path.with_suffix('.torch'))
###Output
_____no_output_____
###Markdown
Create pipe objectThis is for transforming the input prior to prediction.
###Code
X = pd.concat([X_train, X_val, X_test])
prod_pipe = create_preprocessing_pipe(X)
path = models_dir / f'{artefact_prefix}_pipe'
dump(prod_pipe, path.with_suffix('.sav'))
###Output
_____no_output_____
###Markdown
Save `LabelEncoder`This is required to get back the name of the `beer_style`.
###Code
path = models_dir / f'{artefact_prefix}_label_encoder'
dump(le, path.with_suffix('.sav'))
###Output
_____no_output_____ |
ai-platform-unified/notebooks/unofficial/sdk/sdk_automl_image_object_detection_batch.ipynb | ###Markdown
Vertex SDK: AutoML training image object detection model for batch prediction Run in Colab View on GitHub OverviewThis tutorial demonstrates how to use the Vertex SDK to create image object detection models and do batch prediction using Google Cloud's [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users). DatasetThe dataset used for this tutorial is the Salads category of the [OpenImages dataset](https://www.tensorflow.org/datasets/catalog/open_images_v4) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese. ObjectiveIn this tutorial, you create an AutoML image object detection model from a Python script, and then do a batch prediction using the Vertex SDK. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Google Cloud Console.The steps performed include:- Create a Vertex `Dataset` resource.- Train the model.- View the model evaluation.- Make a batch prediction.There is one key difference between using batch prediction and using online prediction:* Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.* Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready. CostsThis tutorial uses billable components of Google Cloud (GCP):* Vertex AI* Cloud StorageLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. InstallationInstall the latest version of Vertex SDK.
###Code
import sys
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = '--user'
else:
USER_FLAG = ''
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
###Output
_____no_output_____
###Markdown
Install the latest GA version of *google-cloud-storage* library as well.
###Code
! pip3 install -U google-cloud-storage $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the Vertex SDK and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
###Code
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin GPU runtime*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.5. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
###Code
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/ai-platform-unified/docs/general/locations)
###Code
REGION = 'us-central1' #@param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.**Click Create service account**.In the **Service account name** field, enter a name, and click **Create**.In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.Click Create. A JSON file that contains your key downloads to your local environment.Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
###Code
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" #@param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Initialize Vertex SDKInitialize the Vertex SDK for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
TutorialNow you are ready to start creating your own AutoML image object detection model. Create a Dataset ResourceFirst, you create an image Dataset resource for the Salads dataset. Data preparationThe Vertex `Dataset` resource for images has some requirements for your data:- Images must be stored in a Cloud Storage bucket.- Each image file must be in an image format (PNG, JPEG, BMP, ...).- There must be an index file stored in your Cloud Storage bucket that contains the path and label for each image.- The index file must be either CSV or JSONL. CSVFor image object detection, the CSV index file has the requirements:- No heading.- First column is the Cloud Storage path to the image.- Second column is the label.- Third/Fourth columns are the upper left corner of bounding box. Coordinates are normalized, between 0 and 1.- Fifth/Sixth/Seventh columns are not used and should be 0.- Eighth/Ninth columns are the lower right corner of the bounding box. Location of Cloud Storage training data.Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage.
###Code
IMPORT_FILE = 'gs://cloud-samples-data/vision/salads.csv'
###Output
_____no_output_____
###Markdown
Quick peek at your dataYou will use a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file.Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.
###Code
if 'IMPORT_FILES' in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
###Output
_____no_output_____
###Markdown
Create the DatasetNext, create the `Dataset` resource using the `create()` method for the `ImageDataset` class, which takes the following parameters:- `display_name`: The human readable name for the `Dataset` resource.- `gcs_source`: A list of one or more dataset index file to import the data items into the `Dataset` resource.- `import_schema_uri`: The data labeling schema for the data items.This operation may take several minutes.
###Code
dataset = aip.ImageDataset.create(
display_name="Salads" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aip.schema.dataset.ioformat.image.bounding_box,
)
print(dataset.resource_name)
###Output
_____no_output_____
###Markdown
Train the modelNow train an AutoML image object detection model using your Vertex `Dataset` resource. To train the model, do the following steps:1. Create an Vertex training pipeline for the `Dataset` resource.2. Execute the pipeline to start the training. Create and run training pipelineTo train an AutoML image object detection model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLImageTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `prediction_type`: The type task to train the model for. - `classification`: An image classification model. - `object_detection`: An image object detection model.- `multi_label`: If a classification task, whether single (`False`) or multi-labeled (`True`).- `model_type`: The type of model for deployment. - `CLOUD`: Deployment on Google Cloud - `CLOUD_HIGH_ACCURACY_1`: Optimized for accuracy over latency for deployment on Google Cloud. - `CLOUD_LOW_LATENCY_`: Optimized for latency over accuracy for deployment on Google Cloud. - `MOBILE_TF_VERSATILE_1`: Deployment on an edge device. - `MOBILE_TF_HIGH_ACCURACY_1`:Optimized for accuracy over latency for deployment on an edge device. - `MOBILE_TF_LOW_LATENCY_1`: Optimized for latency over accuracy for deployment on an edge device.- `base_model`: (optional) Transfer learning from existing `Model` resource -- supported for image classification only.The instantiated object is the DAG for the training job.
###Code
dag = aip.AutoMLImageTrainingJob(
display_name="salads_" + TIMESTAMP,
prediction_type="object_detection",
model_type="CLOUD",
base_model=None,
)
###Output
_____no_output_____
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run()`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `training_fraction_split`: The percentage of the dataset to use for training.- `validation_fraction_split`: The percentage of the dataset to use for validation.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `budget_milli_node_hours`: (optional) Maximum training time specified in unit of millihours (1000 = hour).- `disable_early_stopping`: If `True`, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.The `run` method, when completed, returns the `Model` resource.The execution of the training pipeline will take up to 20 minutes.
###Code
model = dag.run(
dataset=dataset,
model_display_name="salads_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=20000,
disable_early_stopping=False
)
###Output
_____no_output_____
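###Markdown
Before moving on, it can be handy to confirm what came back from the training job. This is just a small sanity-check sketch; it assumes the `model` object returned by `dag.run()` in the cell above.
###Code
# Print basic identifiers of the trained Model resource (assumes the training cell above completed).
print("Model display name:", model.display_name)
print("Model resource name:", model.resource_name)
###Output
_____no_output_____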
###Markdown
Model deployment for batch predictionNow deploy the trained Vertex `Model` resource you created for batch prediction. This differs from deploying a `Model` resource for online prediction.For online prediction, you:1. Create an `Endpoint` resource for deploying the `Model` resource to.2. Deploy the `Model` resource to the `Endpoint` resource.3. Make online prediction requests to the `Endpoint` resource.For batch prediction, you:1. Create a batch prediction job.2. The job service will provision resources for the batch prediction request.3. The results of the batch prediction request are returned to the caller.4. The job service will unprovision the resources for the batch prediction request. Make a batch prediction requestNow do a batch prediction to your deployed model. Get test item(s)Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
###Code
test_items = !gsutil cat $IMPORT_FILE | head -n2
cols_1 = str(test_items[0]).split(',')
cols_2 = str(test_items[1]).split(',')
if len(cols_1) == 11:
test_item_1 = str(cols_1[1])
test_label_1 = str(cols_1[2])
test_item_2 = str(cols_2[1])
test_label_2 = str(cols_2[2])
else:
test_item_1 = str(cols_1[0])
test_label_1 = str(cols_1[1])
test_item_2 = str(cols_2[0])
test_label_2 = str(cols_2[1])
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
###Output
_____no_output_____
###Markdown
Copy test item(s)For the batch prediction, you will copy the test items over to your Cloud Storage bucket.
###Code
file_1 = test_item_1.split('/')[-1]
file_2 = test_item_2.split('/')[-1]
! gsutil cp $test_item_1 $BUCKET_NAME/$file_1
! gsutil cp $test_item_2 $BUCKET_NAME/$file_2
test_item_1 = BUCKET_NAME + "/" + file_1
test_item_2 = BUCKET_NAME + "/" + file_2
###Output
_____no_output_____
###Markdown
Make the batch input fileNow make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:- `content`: The Cloud Storage path to the image.- `mime_type`: The content type. In our example, it is an `jpeg` file.For example: {'content': '[your-bucket]/file1.jpg', 'mime_type': 'jpeg'}
###Code
import tensorflow as tf
import json
gcs_input_uri = BUCKET_NAME + '/test.jsonl'
with tf.io.gfile.GFile(gcs_input_uri, 'w') as f:
data = {"content": test_item_1, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + '\n')
data = {"content": test_item_2, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + '\n')
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
###Output
_____no_output_____
###Markdown
Make the batch prediction requestNow that your `Model` resource is trained, you can make a batch prediction by invoking the `batch_predict()` method, with the following parameters:- `job_display_name`: The human readable name for the batch prediction job.- `gcs_source`: A list of one or more batch request input files.- `gcs_destination_prefix`: The Cloud Storage location for storing the batch prediction results.- `sync`: If set to `True`, the call will block while waiting for the asynchronous batch job to complete.
###Code
batch_predict_job = model.batch_predict(
job_display_name="$(DATASET_ALIAS)_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
sync=False
)
print(batch_predict_job)
###Output
_____no_output_____
###Markdown
Wait for completion of batch prediction jobNext, wait for the batch job to complete.
###Code
batch_predict_job.wait()
###Output
_____no_output_____
###Markdown
Get the predictionsNext, get the results from the completed batch prediction job.The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method `iter_outputs()` to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:- `content`: The prediction request.- `prediction`: The prediction response. - `ids`: The internal assigned unique identifiers for each prediction request. - `displayNames`: The class names for each class label. - `confidences`: The predicted confidence of each object, between 0 and 1, per class label. - `bboxes`: The bounding box for each object
###Code
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
###Output
_____no_output_____
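###Markdown
The raw JSON records can be unpacked further. The next cell is an illustrative sketch only: it assumes `line` still holds the last record parsed above and that its `prediction` field contains the `displayNames`, `confidences` and `bboxes` keys described earlier.
###Code
# Pair each detected label with its confidence and bounding box (illustrative sketch).
prediction = line["prediction"]
for label, confidence, bbox in zip(
    prediction["displayNames"], prediction["confidences"], prediction["bboxes"]
):
    print(f"{label}: confidence={confidence:.3f}, bbox={bbox}")
###Output
_____no_output_____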
###Markdown
Cleaning upTo clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex dataset object
try:
if delete_dataset and 'dataset' in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if delete_model and 'model' in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
    if delete_endpoint and 'endpoint' in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
    if delete_batchjob and 'batch_predict_job' in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
if delete_bucket and 'BUCKET_NAME' in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____ |
fastai_scratch_with_tpu_mnist_4_experiment4.ipynb | ###Markdown
###Code
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
!curl https://course.fast.ai/setup/colab | bash
VERSION = "20200325" #@param ["1.5" , "20200325", "nightly"]
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION
!pip freeze | grep torchvision
!pip freeze | grep torch-xla
!pip install fastcore --upgrade
!pip install fastai2 --upgrade
!pip install fastai --upgrade
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/My\ Drive/course-v4/
!pwd
!pip install -r requirements.txt
%cd nbs
!pwd
###Output
/content/drive/My Drive/course-v4/nbs
###Markdown
Start of import libraries
###Code
from fastai2.vision.all import *
from utils import *
path = untar_data(URLs.MNIST_SAMPLE)
Path.BASE_PATH = path
path.ls()
###Output
_____no_output_____
###Markdown
Import torch xla libraries
###Code
import torch
import torch_xla
import torch_xla.core.xla_model as xm
OptimWrapper?
class WrapperOpt:
    """Wraps an optimizer factory so that each parameter update goes through torch-xla."""
    def __init__(self, f):
        self.f = f  # the original optimizer factory (e.g. Adam, SGD)
    def __call__(self, *args, **kwargs):
        opt = self.f(*args, **kwargs)
        optim_wrapper = OptimWrapper(opt)
        # Replace step() so that xm.optimizer_step executes the pending XLA graph on the TPU.
        def my_step():
            xm.optimizer_step(opt, barrier=True)
        optim_wrapper.step = my_step
        return optim_wrapper
def wrap_xla_optim(opt):
    # Convenience helper: returns a fastai-compatible, XLA-aware optimizer factory.
    w = WrapperOpt(opt)
    return w
###Output
_____no_output_____
###Markdown
Get TPU Device
###Code
tpu_dev = xm.xla_device()
tpu_dev
datablock = DataBlock(
blocks=(ImageBlock(cls=PILImageBW),CategoryBlock),
get_items=get_image_files,
splitter=GrandparentSplitter(),
get_y=parent_label,
item_tfms=Resize(28),
batch_tfms=[])
dls = datablock.dataloaders(path,device=tpu_dev)
adam_xla_opt = wrap_xla_optim(Adam)
sgd_xla_opt = wrap_xla_optim(SGD)
learner = cnn_learner(dls, resnet18, metrics=accuracy,
loss_func=F.cross_entropy, opt_func=adam_xla_opt)
from fastai2.callback.tensorboard import *
learner.fit_one_cycle(3)
!pip freeze | grep tensorboard
###Output
_____no_output_____ |
Anita Mburu-WT-21-022-Week -4-Assessment/8.ipynb | ###Markdown
Exercise Notebook (DS) ` Make sure to finish DAY-4 of WEEK-1 before continuing here!!!`
###Code
# this code conceals irrelevant warning messages
import warnings
warnings.simplefilter('ignore', FutureWarning)
###Output
_____no_output_____
###Markdown
Exercise 1: Numpy NumpyNumPy, which stands for Numerical Python, is a library consisting of multidimensional array objects and a collection of routines for processing those arrays. Using NumPy, mathematical and logical operations on arrays can be performed. Operations using NumPy (IMPORTANCE)Using NumPy, a developer can perform the following operations:1. Mathematical and logical operations on arrays. 2. Fourier transforms (In mathematics, a Fourier series (/ˈfʊrieɪ, -iər/) is a periodic function composed of harmonically related sinusoids, combined by a weighted summation. ... The process of deriving the weights that describe a given function is a form of Fourier analysis.) and routines for shape manipulation.3. Operations related to linear algebra. NumPy has in-built functions for linear algebra and random number generation. The most important object defined in NumPy is an N-dimensional array type called ndarray. It describes the collection of items of the same type. Items in the collection can be accessed using a zero-based index. `An instance of ndarray class can be constructed by different array creation routines described later in the tutorial. The basic ndarray is created using an array function in NumPy as follows`
###Code
import numpy
numpy.array
###Output
_____no_output_____
###Markdown
It creates an ndarray from any object exposing array interface, or from any method that returns an array.
###Code
# Signature of the array constructor, shown for reference (the names below are parameters, not variables):
# numpy.array(object, dtype=None, copy=True, order=None, subok=False, ndmin=0)
###Output
_____no_output_____
###Markdown
The above constructor takes the following parameters:
1. `object` - Any object exposing the array interface, an object whose method returns an array, or any (nested) sequence.
2. `dtype` - Desired data type of the array, optional.
3. `copy` - Optional. By default (true), the object is copied.
4. `order` - C (row major) or F (column major) or A (any) (default).
5. `subok` - By default, the returned array is forced to be a base class array. If true, sub-classes are passed through.
6. `ndmin` - Specifies the minimum dimensions of the resultant array.
Note: All arithmetic operations can be performed on a numpy array.
###Code
import numpy as np
###Output
_____no_output_____
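###Markdown
As a quick illustration of the `dtype` and `ndmin` parameters described above (a small sketch with made-up values):
###Code
# dtype forces the element type; ndmin pads the result with leading dimensions.
demo_f = np.array([1, 2, 3], dtype=float)
print(demo_f, demo_f.dtype)      # [1. 2. 3.] float64
demo_2d = np.array([1, 2, 3], ndmin=2)
print(demo_2d, demo_2d.shape)    # [[1 2 3]] (1, 3)
###Output
_____no_output_____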
###Markdown
`Examples` Operations on Numpy Array
###Code
# A 2-D example array (2 rows x 3 columns)
a = np.array([[1,2,3], [4,1,5]])
print (a)
# Addition
a+3
# Multiplication
a*2
# Subtraction
a-2
# Division
a/3
###Output
_____no_output_____
###Markdown
Task1. Write a NumPy program to test whether none of the elements of a given array is zero.
###Code
a = np.array([2,3,1,0,6,7])
a
for index,item in enumerate(a):
if item==0:
print('Zero value found at Index',index)
else:
print(item," is not zero")
###Output
2 is not zero
3 is not zero
1 is not zero
Zero value found at Index 3
6 is not zero
7 is not zero
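###Markdown
For reference, the same check can be written without an explicit loop, using the vectorized helpers (a short sketch on the same array `a`):
###Code
# np.all is True only if every element is non-zero (0 is treated as False).
print("None of the elements is zero:", np.all(a))
# np.where returns the indices of the zero entries, if any.
print("Indices of zero elements:", np.where(a == 0)[0])
###Output
_____no_output_____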
###Markdown
2. Write a NumPy program to test whether any of the elements of a given array is non-zero.
###Code
import numpy as np
a = np.array([10,33,56,89,0,3,8,9,0,6])
a
a = np.array([10,33,56,89,0,3,8,9,0,6])
print("Original array")
print(a)
print("Test whether any of the elements of a given array is non-zero")
print(np.any(a))
a = np.array([10,33,56,89,0,3,8,9,0,6])
print("Original array:")
print(a)
print("Test whether any of the elements of a give array is non-zero")
print(np.any(a))
for index,item in enumerate(a):
if item==0:
print('Zero value found at Index',index)
else:
print(item," is not zero")
###Output
10 is not zero
33 is not zero
56 is not zero
89 is not zero
Zero value found at Index 4
3 is not zero
8 is not zero
9 is not zero
Zero value found at Index 8
6 is not zero
|
docs/1_qpu_db.ipynb | ###Markdown
Tutorial: QPU Database**This tutorial requires version >=0.0.5 of the QPU DB** Using the QPU DBThe QPU database is a permanent store built for storing calibration data for Quantum Processing Units (QPU).It provides the following features and benefits:* Persistent storage of any python object related to QPU calibration info* Metadata on parameter calibration state and last modified time* Convenient addressing of quantum elements* Easy revert to previously stored parametersIn this short tutorial we will learn how to use the QPU DB by looking at a simplified example of a QPU with two superconductingqubits, two readout resonators and a parametric coupling element. Creating the databaseBelow we can see a simple usage example. The DB is created by calling the `create_new_database` method.This method is similar to initializing a git repo in the sense that we only do it once. Here we initialize itwith an initial dictionary which contains some basic attributes of our QPU. We'll be able to add more attributes,and also elements, later on. Once we call `create_new_qpu_database`, a set of database files will be created for us atthe working directory of the python script.These files are the persistent storage of our DB. They can be saved to a different location by specifyingthe `path` argument to the function.
###Code
# %load_ext autoreload
# %autoreload 2
from entropylab_qpudb import create_new_qpu_database, CalState, QpuDatabaseConnection
initial_dict = {
'q1': {
'f01': 5.65e9 # an initial guess for our transition frequency
},
'q2': {
'f01': 5.25e9
},
'res1': {
'f_r': 7.1e9
},
'res2': {
'f_r': 7.3e9
},
'c1_2': {
'f_r': 0.4e9
}
}
create_new_qpu_database('db1', initial_dict, force_create=True)
###Output
_____no_output_____
###Markdown
Notes:1. here we allow for the possibility of overwriting an existing database by passing the `force_create=True` flag. This option is useful when experimenting with the database creation, however in common usage it is recommended to remove this flag, since when it's false (by default), it will prevent overwriting an existing database and deleting all the data stored in it.2. (For experts): if you need to create a DB server, rather than create a filesystem storage, please let us know. The DB backend is currently the [ZODB](https://zodb.org/en/latest/) database, with plans to be replaced by [gitdb](https://github.com/gitpython-developers/gitdb). The keys of `initial_dict` are called the *elements* (and are similar in nature to QUA's quantum elements), and the values of these elements are subdictionaries of *attributes*. The values of the attributes can be anything you like, or more accurately, any python object that can be pickled. The different elements need not have the same attributes.
###Code
db1 = QpuDatabaseConnection('db1')
###Output
opening qpu database db1 from commit <timestamp: 05/30/2021 06:24:19, message: initial commit> at index 0
###Markdown
and let's view the contents of our DB by calling `print`:
###Code
db1.print()
###Output
q1
----
f01: QpuParameter(value=5400000000.0, last updated: 05/30/2021 09:24:45, calibration state: COARSE)
q2
----
f01: QpuParameter(value=5250000000.0, last updated: 05/30/2021 09:24:19, calibration state: UNCAL)
res1
----
f_r: QpuParameter(value=7100000000.0, last updated: 05/30/2021 09:24:19, calibration state: UNCAL)
res2
----
f_r: QpuParameter(value=7300000000.0, last updated: 05/30/2021 09:24:19, calibration state: UNCAL)
c1_2
----
f_r: QpuParameter(value=400000000.0, last updated: 05/30/2021 09:24:19, calibration state: UNCAL)
###Markdown
Congratulations! You've just created your first QPU DB. As you can see when calling `print` the values we entered in `initial_dict` are now objects of type `QpuParameter`. These objects have 3 attributes:* `value`: the value you created initially and can be any python object* `last_updated`: the time when this parameter was last updated (see *committing* section to understand how to update). This parameter is handled by the DB itself.* `cal_state`: an enumerated metadata that can take the values `UNCAL`, `COARSE`, `MED` and `FINE`. This can be used by the user to communicate what is the calibration level of these parameters. They can be set and queried during the script execution, but are not used by the DB itself. Modifying and using QPU parametersWe can use and modify values and calibration states of QPU parameters in two different ways: Using `get` and `set`Let's modify the value of `f01` and then get the actual value:
###Code
db1.set('q1', 'f01', 5.33e9)
db1.get('q1', 'f01').value
###Output
_____no_output_____
###Markdown
We can also modify the calibration state when setting:
###Code
db1.set('q1', 'f01', 5.36e9, CalState.COARSE)
###Output
_____no_output_____
###Markdown
To get the full `QpuParameter` object we can omit `.value`. We can see that the cal state and modification date were updated.
###Code
db1.get('q1', 'f01')
#db1.get('q1', 'f01').cal_state
###Output
_____no_output_____
###Markdown
Note that we can't modify the value by assigning to value directly - this will raise an exception. Using resolved namesThe names we chose for the elements, namely `'q1'`, `'res1'` and `'c1_2'` have a special significance. If we follow this convention of naming qubit elements with the format 'q'+number, resonators with the format 'res'+number and couplers with the format 'c'+number1+'_'+number2, as shown above, this allows us to get and set values in a more convenient way:
###Code
print(db1.q(1).f01.value)
print(db1.res(1).f_r.value)
print(db1.coupler(1, 2).f_r.value)
print(db1.coupler(2, 1).f_r.value)
###Output
5360000000.0
7100000000.0
400000000.0
400000000.0
###Markdown
While this method is basically syntactic sugar, it allows us to conveniently address elements by indices, which is useful when working with multiple qubit systems, and especially with couplers. We can also set values using this resolved addressing method:
###Code
db1.update_q(1, 'f01', 5.4e9)
db1.q(1).f01
###Output
_____no_output_____
###Markdown
Note: This default mapping between integer indices and strings can be modified by subclassing the `Resolver` class found under `entropylab_qpudb._resolver.py`. Committing (saving to persistent storage) and viewing historyNothing we've done so far has modified the persistent storage. In order to do this, we need to *commit* the changes we made. This allows us to control at which stages we want to make aggregated changes to the database. Let's see how this is done. We need to call `commit`, and specify an optional commit message:
###Code
db1.update_q(1, 'f01', 6.e9)
db1.commit('a test commit')
###Output
commiting qpu database db1 with commit <timestamp: 05/30/2021 06:26:20, message: a test commit> at index 1
###Markdown
Now the actual file has changed. To see this, we need to close the db. We can then delete db1, and when re-opening the DB we'll see that f01 of q1 has the modified value.
###Code
db1.close()
del db1
db1 = QpuDatabaseConnection('db1')
db1.q(1).f01
###Output
closing qpu database db1
closing qpu database db1
opening qpu database db1 from commit <timestamp: 05/27/2021 06:44:34, message: a test commit> at index 1
###Markdown
Note that the commit was saved with an index. This index can be later used to revert to a [previous state](reverting-to-a-previous-state). To view a history of all the commits, we call `get_history`. Note that the timestamps of the commits are in UTC time.
###Code
db1.get_history()
###Output
_____no_output_____
###Markdown
Adding attributes and elementsIn many cases you realize while calibrating your system that you want to add attributes that did not exist in the initial dictionary, or even new elements. This is easy using the `add_element` and `add_attribute` methods. Let's see an example for `add_attribute`:
###Code
db1.add_attribute('q1', 'anharmonicity')
print(db1.q(1).anharmonicity)
db1.update_q(1, 'anharmonicity', -300e6, new_cal_state=CalState.COARSE)
print(db1.q(1).anharmonicity)
###Output
QpuParameter(None)
QpuParameter(value=-300000000.0, last updated: 05/30/2021 09:26:25, calibration state: COARSE)
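###Markdown
For completeness, adding a brand-new element works in a similar way. The cell below is only an illustrative sketch: it assumes `add_element` accepts the new element's name, by analogy with `add_attribute` above; check the package documentation for the exact signature.
###Code
# Hypothetical example: create a new element 'q3', give it an attribute and set a value.
# The add_element signature is assumed here, not taken from the package docs.
db1.add_element('q3')
db1.add_attribute('q3', 'f01')
db1.update_q(3, 'f01', 5.1e9, new_cal_state=CalState.UNCAL)
print(db1.q(3).f01)
###Output
_____no_output_____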
###Markdown
Reverting to a previous stateMany times when we work on bringing up a QPU, we reach a point where everything is calibrated properly and our measurements and calibrations give good results. We want to be able to make additional changes, but to possibly revert to the good state if things go wrong. We can do this using `restore_from_history`. We simply need to provide it with the history index to which we want to return:
###Code
db1.restore_from_history(0)
print(db1.q(1).f01)
assert db1.q(1).f01.value == initial_dict['q1']['f01']
###Output
opening qpu database db1 from commit <timestamp: 05/30/2021 06:24:19, message: initial commit> at index 0
QpuParameter(value=5650000000.0, last updated: 05/30/2021 09:24:19, calibration state: UNCAL)
###Markdown
Calling this method will replace the current working DB with the DB that was stored in the commit with the index supplied to `restore_from_history`. The new values will not be committed. It is possible to modify the values and commit them as usual. Next stepsWhile the QPU DB is a standalone tool, it is designed with the QUA calibration node framework in mind. In the notebook called `2_qubit_graph_calibration.ipynb` we explore how the QUA calibration nodes framework can be used to generate calibration graphs. Remove DB filesTo remove the DB files created in your workspace for the purpose of this demonstration, first close the db connection:
###Code
db1.close()
###Output
closing qpu database db1
###Markdown
then run this cell:
###Code
from glob import glob
import os
for fl in glob("db1*"):
os.remove(fl)
###Output
_____no_output_____ |
Damarad_Viktor/thermodynamics_practice.ipynb | ###Markdown
Thermodynamic parameters. Gas laws
* Micro- and macroparameters of the state of a gas
* The basic equation of the molecular-kinetic theory (MKT)
* Temperature. Absolute temperature
* The ideal gas model
* The Mendeleev-Clapeyron equation
* The relation between temperature and the mean kinetic energy of molecules
* Definition of the first law of thermodynamics
* The first law of thermodynamics in processes
* Applications
* Distribution functions
* The Maxwell distribution
* The Boltzmann distribution
* The Maxwell-Boltzmann distribution

**Thermodynamics** is the branch of physics that studies how the internal energy of bodies changes and is transformed, and how that internal energy can be used in engines. Thermodynamics in fact grew out of the analysis of the principles and efficiency of the first heat engines and steam engines; one can say that this branch of physics begins with a small but very important work by the young French physicist Nicolas Sadi Carnot.

Micro- and macroparameters of the state of a gas
A system consisting of a very large number of molecules is called a macrosystem. A macrosystem separated from external bodies by walls with constant properties reaches an equilibrium state after a sufficiently long time. This state can be described by a set of quantities called *state parameters*. One distinguishes *microparameters* and *macroparameters* of state.
The microparameters of state include physical quantities such as the mass $m_0$ of a molecule, its velocity, the root-mean-square speed of the molecules, their mean kinetic energy, the mean time between molecular collisions and the mean free path, etc. These are parameters that can be attributed to a single molecule of the macrosystem.
The macroparameters of state characterize only the equilibrium system as a whole. They include the volume $V$, pressure $P$, temperature $T$, density $\rho$, concentration $n$, internal energy $U$, and electric, magnetic and optical parameters. Their values can be determined with measuring instruments.
The molecular-kinetic theory of the ideal gas establishes the correspondence between the micro- and macroparameters of a gas.

**Table. Microparameters of state**
|Parameter | Symbol | SI unit |
|:---------|:------:|:-------:|
|Mass of a molecule | $m_0$ | $kg$ |
|Speed of a molecule | $v$ | $m/s$ |
|Root-mean-square speed of the molecules |$\overline{v}_{rms}$| $m/s$ |
|Mean kinetic energy of translational motion|$\overline{E}_{k}$ | $J$ |

**Table. Macroparameters of state**
|Parameter |Symbol| SI unit |Measurement method (direct or indirect)|
|:---------|:----:|:-------:|:--------------------------------------|
|Mass of the gas |$M$ |$kg$|Balance|
|Volume of the vessel| $V$ |$m^3$|Graduated cylinder with water \\ measuring the dimensions and computing from geometry|
|Pressure |$P$ |$Pa$|Manometer|
|Temperature| $T$ |$K$|Thermometer|
|Density | $\rho$|$kg/m^3$|Measuring mass and volume and computing|
|Concentration| $n$ |$1/m^3 = m^{-3}$ |Measuring the density and computing, using the molar mass|
|Composition (molar masses and ratio of amounts)|$M_1$, $M_2$, $\frac{n_1}{n_2}$ |$\frac{kg}{mol}$, dimensionless|Preparing the gas by mixing given masses or volumes|

The basic equation of the molecular-kinetic theory of the ideal gas
This equation relates the macroparameters of the system, the pressure $P$ and the molecular concentration $n=\frac{N}{V}$, to its microparameters: the mass of the molecules and their mean square speed or mean kinetic energy:
$$p=\frac{1}{3}nm_0\overline{v^2} = \frac{2}{3}n\overline{E_k}$$
The derivation of this equation rests on the assumptions that the molecules of an ideal gas obey the laws of classical mechanics and that the pressure is the time-averaged force with which the molecules strike the wall, divided by the wall area.
The proportionality of this force to the concentration, mass and speed of the molecules is qualitatively clear. The quadratic growth of the pressure with speed arises because the speed determines not only the strength of an individual impact but also the frequency of collisions with the wall.
Using the relation between the concentration of molecules in the gas and its density $(\rho = nm_0)$, the basic MKT equation for the ideal gas can be written in another form:
$$p=\frac{1}{3}\rho\overline{v^2}$$

Temperature. Absolute temperature
**Fig. 2. Liquid thermometers**
When two macrosystems, each in equilibrium, are brought into contact, for example by opening a valve between two thermally insulated vessels of gas or through a heat-conducting wall, the equilibrium is disturbed. After a long time, new values of the parameters are established in the parts of the combined system. Speaking only of macroparameters, the temperatures of the bodies equalize.
The concept of "temperature" was introduced into physics as a physical quantity characterizing how hot a body is, based not on the subjective sensations of the experimenter but on the objective readings of physical instruments.
A *thermometer* is an instrument for measuring temperature whose operation is based on a one-to-one relation between some observable parameter of a system (pressure, volume, electrical conductivity, brightness of glow, etc.) and the temperature (Fig. 2).
If this secondary parameter (for example, the volume of mercury in a mercury thermometer) is the same during prolonged contact with one body and during prolonged contact with another, the temperatures of the two bodies are considered equal. Experiments on the distribution of molecular speeds showed that this distribution depends only on the degree of heating of the body measured by a thermometer. In modern statistical physics, the form of the distribution of the particles of a system over energies characterizes its temperature.
To calibrate a thermometer one needs bodies whose temperature is considered fixed and reproducible. These are usually the temperature of the equilibrium ice-water system at normal pressure $(0\ °C)$ and the boiling temperature of water at normal pressure $(100\ °C)$.
In SI the temperature is expressed in kelvins $(K)$. On this scale $0\ °C = 273.15\ K$ and $100\ °C = 373.15\ K$. Other temperature scales are used in everyday life.

The ideal gas model
An ideal gas is a model of a rarefied gas in which the interaction between molecules is neglected. The real intermolecular forces are rather complicated: at very small separations the molecules repel each other strongly, while at large and intermediate separations they attract each other weakly. If the distances between molecules are on average large, as in a sufficiently rarefied gas, the interaction shows up only as relatively rare collisions of molecules that approach each other closely. In the ideal gas the interaction of molecules is neglected altogether.
The theory was created by the German physicist R. Clausius in 1857 for a model of a real gas called the ideal gas. The main assumptions of the model are:
* the distances between molecules are large compared with their sizes;
* there is no interaction between molecules at a distance;
* large repulsive forces act during collisions of molecules;
* the collision time is much shorter than the free-flight time between collisions;
* the motion obeys Newton's laws;
* the molecules are elastic spheres;
* interaction forces arise only during collisions.
The limits of applicability of the ideal gas model depend on the problem considered. If one needs the relation between pressure, volume and temperature, the gas can be treated as ideal to good accuracy up to pressures of a few tens of atmospheres. If a phase transition such as evaporation or condensation is studied, or the approach to equilibrium in a gas is considered, the ideal gas model cannot be used even at pressures of a few millimetres of mercury.
The pressure of a gas on the wall of a vessel is the result of chaotic impacts of molecules on the wall; because of their high frequency these impacts are perceived by our senses and by instruments as a continuous force acting on the wall of the vessel and creating pressure.
Let a single molecule be in a vessel shaped like a rectangular parallelepiped (see Fig. 1). Consider, for example, its impacts on the right wall of the vessel, perpendicular to the $x$ axis. We take the impacts on the walls to be perfectly elastic, so the angle of reflection equals the angle of incidence and the speed does not change in an impact. In our case the projection of the molecule's velocity on the $y$ axis does not change, while the projection on the $x$ axis changes sign. Thus the momentum projection changes in an impact by $-2mv_x$; the minus sign means that the projection of the final velocity is negative while that of the initial velocity is positive.
Let us determine the number of impacts of the molecule on this wall per second. The magnitude of the velocity projection does not change in impacts on any wall, i.e. the motion of the molecule along the $x$ axis is uniform. In one second it covers a distance equal to the velocity projection $v_x$. Between successive impacts on the same wall the molecule travels along the $x$ axis a distance equal to twice the length of the vessel, $2L$. Therefore the number of impacts of the molecule on the chosen wall equals $\frac{v_x}{2L}$. By Newton's second law the average force equals the change of the momentum of the body per unit time. If in each impact the particle changes its momentum by $2mv_x$ and the number of impacts per unit time is $\frac{v_x}{2L}$, then the average force acting on the molecule from the wall (equal in magnitude to the force acting on the wall from the molecule) is $f=\frac{mv_x^2}{L}$, and the average pressure of the molecule on the wall is $p=\frac{f}{S}=\frac{mv_x^2}{LS}=\frac{mv_x^2}{V}$, where $V$ is the volume of the vessel.
If all molecules had the same speed, the total pressure would be obtained simply by multiplying this quantity by the number of particles $N$, i.e. $p=\frac{Nmv_x^2}{V}$. But since the molecules of a gas have different speeds, the mean value of the squared speed appears in this formula, which becomes $p=\frac{Nm\langle v_x^2\rangle}{V}$.
The square of the magnitude of the velocity equals the sum of the squares of its projections, and the same holds for their mean values: $\langle v^2\rangle=\langle v_x^2\rangle+\langle v_y^2\rangle+\langle v_z^2\rangle$. Because thermal motion is chaotic, the mean values of the squares of all velocity projections are equal, since there is no preferred direction of molecular motion. Hence $\langle v^2\rangle=3\langle v_x^2\rangle$, and the formula for the gas pressure takes the form $p=\frac{Nm\langle v^2\rangle}{3V}$. Introducing the kinetic energy of a molecule $E_k=\frac{mv^2}{2}$, we obtain $p=\frac{2N}{3V}\langle E_k\rangle$, where $\langle E_k\rangle$ is the mean kinetic energy of a molecule.

The Mendeleev-Clapeyron equation (the equation of state of an ideal gas)
Experimental studies by many scientists established that the macroparameters of real gases cannot change independently. They are related by the equation of state:
$$PV = \nu RT$$
where $R = 8.31\ J/(K\cdot mol)$ is the universal gas constant and $\nu = \frac{m}{M}$, with $m$ the mass of the gas and $M$ its molar mass. The Mendeleev-Clapeyron equation is called the *equation of state* because it relates the *state parameters* by a functional dependence. It is also written in other forms:
$$pV = \frac{m}{M}RT$$
$$p=\frac{\rho}{M}RT$$
Using the equation of state, one parameter can be expressed through another and plotted as a function of it.
The curves of one parameter against another obtained at fixed temperature, volume or pressure are called the *isotherm*, *isochore* and *isobar* respectively. For example, the dependence of pressure $P$ on temperature $T$ at constant volume $V$ and constant mass $m$ of gas is the function $p(T)=\frac{mR}{MV}T = kT$, where $k$ is a constant numerical factor. The graph of such a function in the $P$, $T$ coordinates is a straight line through the origin, just like the graph of $y(x)=kx$ in the $y, x$ coordinates (Fig. 3).
The dependence of pressure $P$ on volume $V$ at constant mass $m$ of gas and temperature $T$ is $p(V)=\frac{mRT}{M}\cdot{\frac{1}{V}}=\frac{k_1}{V}$, where $k_1$ is a constant numerical factor. The graph of $y(x)=\frac{k_1}{x}$ in the $y, x$ coordinates is a hyperbola, as is the graph of $p(V)=\frac{k_1}{V}$ in the $P$, $V$ coordinates.
Consider the particular gas laws. At constant temperature and mass it follows that $pV=const$, i.e. at constant temperature and mass of a gas its pressure is inversely proportional to the volume. This is the *Boyle-Mariotte law*, and a constant-temperature process is called isothermal.
For an isobaric process, occurring at constant pressure, it follows that $V=(\frac{m}{pM}R)T$, i.e. the volume is proportional to the absolute temperature. This is *Gay-Lussac's law*.
For an isochoric process, occurring at constant volume, it follows that $p=(\frac{m}{VM}R)T$, i.e. the pressure is proportional to the absolute temperature. This is *Charles's law*.
These three gas laws are thus special cases of the equation of state of the ideal gas. Historically they were first discovered experimentally, and only much later derived theoretically from molecular considerations.

The relation between temperature and the mean kinetic energy of molecules
The quantitative relation between the temperature $T$ of the system (a macroparameter) and the mean kinetic energy $\overline{E_k}$ of a molecule of an ideal gas (a microparameter) can be obtained by comparing the basic MKT equation of the ideal gas, $p=\frac{2}{3}n\overline{E_k}$, with the equation of state $p=\frac{\nu RT}{V} = nkT$, where $k=\frac{R}{N_A}=1.38\cdot10^{-23}\ J/K$ is the Boltzmann constant. Comparing the two expressions for the pressure gives
$$\overline{E_k}=\frac{3}{2}kT$$
The mean kinetic energy of the molecules of an ideal gas is proportional to the temperature of the gas. If the gas molecules consist of two, three or more atoms, it can be shown that this expression relates only the energy of translational motion of the molecule as a whole to the temperature.
With this relation between the micro- and macroparameters of a macrosystem one can state that in the *state of thermal equilibrium* of two systems their temperatures are equal and, in the case of an ideal gas, so are the mean kinetic energies of the molecules.

Definition of the first law of thermodynamics
The most important law underlying thermodynamics is the first law (first principle) of thermodynamics. To understand its essence, recall first what internal energy is. The **internal energy of a body** is the energy of motion and interaction of the particles of which it consists. We know that the internal energy of a body can be changed by changing its temperature, and the temperature of a body can be changed in two ways:
1. by doing work (either the body itself does work, or external forces do work on the body);
2. by heat exchange, i.e. transfer of internal energy from one body to another without doing work.
The work done by a gas is denoted $A_r$, and the amount of internal energy transferred or received in heat exchange is called the quantity of heat and is denoted $Q$. The internal energy of a gas, or of any body, is usually denoted $U$, and its change, like the change of any physical quantity, is written with a $\Delta$, i.e. $\Delta U$.
Let us formulate the **first law of thermodynamics** for a gas. Note first that when a gas receives some amount of heat from another body, its internal energy increases, and when the gas does some work, its internal energy decreases. That is why the first law of thermodynamics has the form:
$$\Delta U = Q - A_r$$
Since the work done by the gas and the work done on the gas by external forces are equal in magnitude and opposite in sign, the first law of thermodynamics can also be written as:
$$\Delta U = Q + A_{ext}$$
The meaning of this law is simple: the internal energy of a gas can be changed in two ways - either by doing work (by the gas, or on the gas), or by transferring to it, or removing from it, some amount of heat.

The first law of thermodynamics in processes
Applied to the isoprocesses, the first law of thermodynamics can be written somewhat differently, taking into account the particular features of these processes. Consider the main processes and the form the first law of thermodynamics takes in each of them.
1. An isothermal process is a process occurring at constant temperature. Since the amount of gas is also constant and the internal energy depends on the temperature and the amount of gas, the internal energy does not change in this process: $U = const$, hence $\Delta U = 0$, and the first law of thermodynamics takes the form $Q = A_r$.
2. An isochoric process is a process occurring at constant volume. In this process the gas neither expands nor is compressed, so no work is done either by the gas or on the gas: $A_r = 0$, and the first law of thermodynamics becomes $\Delta U = Q$.
3. An isobaric process is a process at constant gas pressure, in which both the temperature and the volume change, so the first law of thermodynamics keeps its most general form: $\Delta U = Q - A_r$.
4. An adiabatic process is a process without heat exchange between the gas and its surroundings (either the gas is in a thermally insulated vessel, or its expansion or compression happens very quickly). In such a process the gas neither receives nor gives off heat, so $Q = 0$, and the first law of thermodynamics takes the form $\Delta U = -A_r$.

Applications
The first law of thermodynamics is of enormous importance in this science. The very concept of internal energy brought the theoretical physics of the 19th century to a fundamentally new level. Concepts such as the thermodynamic system, thermodynamic equilibrium, entropy and enthalpy appeared. In addition, it became possible to quantify internal energy and its change, which eventually led scientists to understand the very nature of heat as a form of energy.
To apply the first law of thermodynamics in problems, two important facts are needed. First, the internal energy of an ideal monatomic gas is $U=\frac{3}{2}\nu RT$; second, the work done by the gas is numerically equal to the area of the figure under the graph of the given process drawn in the $p-V$ coordinates. With this, one can compute the change of internal energy, the heat received or given off by the gas, and the work done by the gas or on the gas in any process. One can also determine the efficiency of an engine, knowing which processes take place in it.

Distribution functions
The main tool of the statistical method of description is the distribution function, which determines the statistical characteristics of the system under consideration. Knowing how it changes with time makes it possible to describe the behaviour of the system in time. The distribution function allows one to calculate all observable thermodynamic parameters of the system.
To introduce the notion of a distribution function, consider first a macroscopic system whose state is described by some parameter $x$ taking $K$ discrete values: $x_1,x_2,x_3,...,x_K$. Suppose that in $N$ measurements performed on the system the value $x_1$ was observed in $N_1$ measurements, the value $x_2$ in $N_2$ measurements, and so on. Obviously, the total number of measurements $N$ equals the sum of all the measurements $N_i$ in which the values $x_i$ were obtained:
$$N=\sum_{i=1}^K N_i$$
As the number of experiments tends to infinity, the ratio $\frac{N_i}{N}$ tends to a limit
$$\tag{10.1} P(x_i)=\lim_{N\to\infty}\frac{N_i}{N}$$
The quantity $P(x_i)$ is called the probability of measuring the value $x_i$.
The probability $P(x_i)$ takes values in the interval $0\le P(x_i)\le1$. The value $P(x_i)=0$ corresponds to the case when the value $x_i$ is never observed in any measurement, so the system cannot be in a state with parameter $x_i$. Accordingly, the probability $P(x_i)=1$ is possible only if the value $x_i$ is observed in every measurement; in this case the system is in a deterministic state with parameter $x_i$.
The sum of the probabilities $P(x_i)$ of finding the system in all states with parameters $x_i$ equals one:
$$\tag{10.2} \sum_{i=1}^{K}P(x_i)=\frac{\sum_{i=1}^{K}N_i}{N} = \frac{N}{N}=1$$
Condition $(10.2)$ expresses the fairly obvious fact that if the set of possible discrete values $x_i$, $i=1,2,...K$, is complete (i.e. includes all values of the parameter $x$ allowed by the conditions of the physical problem), then any measurement of the parameter $x$ must yield a value from this set.
The case we have considered, in which the parameter characterizing the system takes a set of discrete values, is not typical of the description of macroscopic thermodynamic systems. Indeed, parameters such as temperature, pressure, internal energy, etc., usually take a continuous range of values. Similarly, the variables describing the motion of microparticles (coordinate and velocity) change continuously for systems described by classical mechanics.
Therefore consider the statistical description applicable when the measured parameter $x$ can take any value in some interval $a\le x\le b$. The interval need not be bounded by finite values $a$ and $b$; in particular, $x$ may in principle range from $-\infty$ to $+\infty$, as do, for example, the coordinates of a gas molecule in an unbounded medium.
Suppose the measurements show that the quantity $x$ falls with probability $dP(x)$ into the interval of values from $x$ to $x+dx$. Then one can introduce a function $f(x)$ characterizing the probability density:
$$\tag{10.3} f(x)=\frac{dP(x)}{dx}$$
In physics this function is usually called the distribution function.
The distribution function $f(x)$ must satisfy the condition $f(x) \ge 0$, because the probability of the measured value falling into the interval from $x$ to $x+dx$ cannot be negative.
The probability that a measured value falls in the interval $x_1\le x\le x_2$ equals
$$\tag{10.4} P(x_1\le x\le x_2)=\int_{x_1}^{x_2}f(x)dx$$
Accordingly, the probability of the measured value falling anywhere in the whole interval of possible values $a\le x\le b$ equals one:
$$\tag{10.5} \int_{a}^{b}f(x)dx=1$$
Expression $(10.5)$ is called the normalization condition of the distribution function.
The distribution function $f(x)$ allows one to determine the mean value of any function $\phi(x)$:
$$\tag{10.6} \langle\phi(x)\rangle=\int_{a}^{b}\phi(x)f(x)dx$$
In particular, formula $(10.6)$ gives the mean value of the parameter $x$ itself:
$$\tag{10.7} \langle x\rangle=\int_{a}^{b}xf(x)dx$$
If the state of the system is characterized by two parameters $x$ and $y$, the probability of finding it in a state with the values of these parameters in the intervals $x_1\le x\le x_2$ and $y_1\le y\le y_2$ respectively equals
$$\tag{10.8} P(x_1\le x\le x_2, y_1\le y\le y_2)=\int_{x_1}^{x_2}\int_{y_1}^{y_2}f(x,y)dxdy$$
where $f(x, y)$ is a two-dimensional distribution function. An example of such a function is the joint distribution of the coordinates and velocities of gas molecules.
Accordingly, for infinitesimal intervals $dx$ and $dy$ the probability $dP(x, y)$ can be written as
$$\tag{10.9}dP(x, y) = f(x, y)dxdy$$
If the values of the parameters $x$ and $y$ are statistically independent of each other, the two-dimensional distribution function $f(x, y)$ equals the product of the distribution functions $f(x)$ and $f(y)$:
$$\tag{10.10} f(x, y)=f(x)f(y)$$
This property of distribution functions will be used below when considering the Maxwell-Boltzmann distribution.

The Maxwell distribution
The Maxwell distribution function
Suppose there are n identical molecules in a state of random thermal motion at a certain temperature. After each collision between molecules their velocities change randomly. As a result of an unimaginably large number of collisions a stationary equilibrium state is established, in which the number of molecules in a given interval of speeds remains constant.
The distribution of the molecules of an ideal gas over speeds was first obtained by the famous English scientist J. Maxwell in 1860 using the methods of probability theory.
**The Maxwell distribution function characterizes the distribution of molecules over speeds** and is determined by the ratio of the kinetic energy of a molecule $\frac{mv^2}{2}$ to the mean energy of its thermal motion $kT$:
$$f(v)=\frac{dn}{ndv}=\frac{4}{\sqrt\pi}\left(\frac{m}{2kT}\right)^{\frac{3}{2}}\exp\left(-\frac{mv^2}{2kT}\right)v^2$$
This function gives the fraction of the molecules of a unit volume of gas whose absolute speeds lie in the interval from $v$ to $v + \Delta v$ containing the given speed.
Denoting the factor in front of the exponential by $A$, we obtain the final expression for the **Maxwell distribution function**:
$$f(v)=A\exp\left(-\frac{mv^2}{2kT}\right)v^2$$
The graph of this function is shown in Fig. 3.2.1.
Mean speeds of the Maxwell distribution
From the graph of the Maxwell distribution function shown in Fig. 3.2.1 it can be seen that the **most probable speed** is *the speed at which the curve has its maximum*.
* *The most probable speed of a molecule* is $v_{p}=\sqrt{\frac{2kT}{m}}$; for one mole of gas $v_{p}=\sqrt{\frac{2RT}{M}}$
* *The arithmetic mean speed of the molecules* is $\langle v\rangle=\sqrt{\frac{8kT}{\pi m}}$; for one mole of gas $\langle v\rangle=\sqrt{\frac{8RT}{\pi M}}$
* *The root-mean-square speed of a molecule* is $v_{rms}=\sqrt{\frac{3kT}{m}}$; for one mole of gas $v_{rms}=\sqrt{\frac{3RT}{M}}$
Dependence of the Maxwell distribution function on the molecular mass and the gas temperature
Figure 3.2.2 shows that as the molecular mass increases $(m_1 > m_2 > m_3)$ and as the temperature decreases $(T_1 < T_2 < T_3)$ the maximum of the Maxwell distribution function shifts to the right, towards higher speeds.
*The area under the curve* is *constant and equal to one*, so it is important to know how the position of the maximum of the curve changes:
$f(v)\approx\sqrt{\frac{m}{T}}$, and in addition $v\approx\sqrt{\frac{T}{m}}$.
Conclusions:
* The form of the distribution of gas molecules over speeds **depends on the kind of gas and on the temperature**. The pressure $P$ and volume $V$ of the gas do not affect the distribution.
* The exponent of $f(v)$ contains the ratio of the kinetic energy corresponding to a given speed to the mean energy of thermal motion of the molecules; hence **the Maxwell distribution characterizes the distribution of molecules over kinetic energies**.
* **The Maxwell law is statistical** and holds the better, the larger the number of molecules.
The Maxwell formula for relative speeds
Denote the relative speed by $u=\frac{v}{v_{p}}$. Then the **Maxwell distribution law** takes the reduced form:
$$f(u)=\frac{dn}{ndu}=\frac{4}{\sqrt\pi}\exp(-u^2)u^2$$
This equation is universal: *in this form the distribution function depends neither on the kind of gas nor on the temperature*.
The barometric formula
The atmospheric pressure at some height $h$ is due to the weight of the layers of gas lying above. Let $P$ be the pressure at height $h$ and $P + dP$ at height $h + dh$ (Fig. 3.2.3).
The pressure difference $P - (P + dP)$ equals the weight of the gas contained in a cylinder of unit base area and height $dh$.
Since $P = \rho gh$, where $\rho = PM/RT$ is the density of the gas at height $h$, which slowly decreases with height, we can write $P - (P + dP) = \rho gdh$.
From this one can obtain the **barometric formula**, which shows how the atmospheric pressure depends on the height:
$$P=P_0\exp\left(-\frac{Mgh}{RT}\right)$$
It follows from the barometric formula that the pressure decreases with height the faster, the heavier the gas (the larger $M$) and the lower the temperature. For example, at large heights the concentration of the light gases He and H2 is much larger than near the Earth's surface (Fig. 3.2.4).

The Boltzmann distribution
Starting from the basic equation of the molecular-kinetic theory $P = nkT$ and replacing $P$ and $P_0$ in the barometric formula by $n$ and $n_0$, we obtain the *distribution of molecules in an external potential field*, the **Boltzmann distribution**:
$n=n_0\exp\left(-\frac{Mgh}{RT}\right)$, or $n=n_0\exp\left(-\frac{mgh}{kT}\right)$, where $n_0$ and $n$ are the numbers of molecules per unit volume at height $h = 0$ and at height $h$.
As the temperature decreases, the number of molecules at heights other than zero decreases. At $T = 0$ thermal motion would cease and all molecules would settle on the Earth's surface. At high temperatures, on the contrary, the molecules turn out to be distributed almost uniformly over height, and the density of molecules decreases slowly with height. Since $mgh$ is the potential energy $E_p$, at different heights $E_p = mgh$ is different. Consequently, the equation characterizes the distribution of particles over values of the potential energy:
$$n =n_0\exp\left(-\frac{E_p}{kT}\right)$$
This is **the law of the distribution of particles over potential energies, the Boltzmann distribution**.

The Maxwell-Boltzmann distribution
Thus the Maxwell law gives the distribution of particles over kinetic energies, and the Boltzmann law gives the distribution over potential energies. Taking into account that the total energy is $E = E_p + E_k$, the two distributions can be combined into a single **Maxwell-Boltzmann law**:
$$dn=n_0A\exp\left(-\frac{E}{kT}\right)$$

Task: implement a model of the behaviour of an ideal gas in a closed volume, for a given temperature, particle mass and number of particles.
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation
from scipy.stats import maxwell
# %matplotlib tk
# %matplotlib notebook
# from IPython.display import HTML
# plt.rcParams["animation.html"] = "jshtml"
%matplotlib widget
mw = maxwell()
k = 1.38e-23
R = 8.31
N = 10
T = 5000
m = 6.645e-27
dt = 10e-5
v = np.sqrt(mw.rvs(size=N) * 2 * k * T / m)
alpha = np.random.uniform(0, 2 * np.pi, N)
vx = v * np.cos(alpha)
vy = v * np.sin(alpha)
x = np.random.uniform(0, 10, N)
y = np.random.uniform(0, 10, N)
def ani_func(i):
global x, y, vx, vy, dt
eps = 0.01
plt.clf()
x += vx * dt
y += vy * dt
vx[x + eps >= 10] = -vx[x + eps >= 10]
vx[x - eps <= 0] = -vx[x - eps <= 0]
vy[y + eps >= 10] = -vy[y + eps >= 10]
vy[y - eps <= 0] = -vy[y - eps <= 0]
plt.scatter(x, y)
plt.xlim(0, 10)
plt.ylim(0, 10)
plt.show()
fig = plt.figure(figsize=(5, 5))
skip = 1
ani = animation.FuncAnimation(fig, ani_func, frames=1000, repeat=False, interval=1)
ani.event_source.stop()
###Output
_____no_output_____
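###Markdown
As a quick numerical cross-check of the formulas above (a small sketch reusing the constants `k`, `T`, `m` and the sampled speeds `v` defined in the previous cell), we can print the characteristic speeds of the Maxwell distribution next to simple statistics of the sampled speeds:
###Code
# Characteristic speeds of the Maxwell distribution for the chosen T and m
v_p = np.sqrt(2 * k * T / m)               # most probable speed
v_mean = np.sqrt(8 * k * T / (np.pi * m))  # arithmetic mean speed
v_rms = np.sqrt(3 * k * T / m)             # root-mean-square speed
print(f"v_p = {v_p:.0f} m/s, <v> = {v_mean:.0f} m/s, v_rms = {v_rms:.0f} m/s")
# Statistics of the N sampled particle speeds (N is small, so these are noisy)
print(f"sample mean = {v.mean():.0f} m/s, sample rms = {np.sqrt((v ** 2).mean()):.0f} m/s")
###Output
_____no_output_____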
###Markdown
Task: implement a model of a mixture of two ideal gases in a closed volume, for given temperatures, particle masses and numbers of particles.
###Code
k = 1.38e-23
R = 8.31
N1 = 10
N2 = 10
T1 = 1000
T2 = 300
m1 = 6.645e-27
m2 = 14.325e-27
dt = 10e-5
v1 = np.sqrt(mw.rvs(size=N1) * 2 * k * T1 / m1)
alpha = np.random.uniform(0, 2 * np.pi, N1)
vx1 = v1 * np.cos(alpha)
vy1 = v1 * np.sin(alpha)
v2 = np.sqrt(mw.rvs(size=N2) * 2 * k * T2 / m2)
alpha = np.random.uniform(0, 2 * np.pi, N2)
vx2 = v2 * np.cos(alpha)
vy2 = v2 * np.sin(alpha)
x1 = np.random.uniform(0, 5, N1)
y1 = np.random.uniform(0, 10, N1)
x2 = np.random.uniform(5, 10, N2)
y2 = np.random.uniform(0, 10, N2)
def ani_func_2(i):
global x1, y1, x2, y2, vx1, vy1, vx2, vy2, dt
eps = 0.01
plt.clf()
x1 += vx1 * dt
y1 += vy1 * dt
x2 += vx2 * dt
y2 += vy2 * dt
vx1[x1 + eps >= 10] = -vx1[x1 + eps >= 10]
vx1[x1 - eps <= 0] = -vx1[x1 - eps <= 0]
vy1[y1 + eps >= 10] = -vy1[y1 + eps >= 10]
vy1[y1 - eps <= 0] = -vy1[y1 - eps <= 0]
vx2[x2 + eps >= 10] = -vx2[x2 + eps >= 10]
vx2[x2 - eps <= 0] = -vx2[x2 - eps <= 0]
vy2[y2 + eps >= 10] = -vy2[y2 + eps >= 10]
vy2[y2 - eps <= 0] = -vy2[y2 - eps <= 0]
plt.scatter(x1, y1)
plt.scatter(x2, y2)
plt.xlim(0, 10)
plt.ylim(0, 10)
plt.show()
fig = plt.figure(figsize=(5, 5))
skip = 1
ani = animation.FuncAnimation(fig, ani_func_2, frames=1000, repeat=False, interval=1)
ani.event_source.stop()
# ani.save("figure_2.gif")
###Output
_____no_output_____ |
DevelopmentNotebooks/win_pyvisa-Copy2.ipynb | ###Markdown
Windows 10, py-visaTesting on more platforms.
###Code
import mhs5200
signal_gen = mhs5200.MHS5200("COM4")
import pyvisa
rm = pyvisa.ResourceManager()
rm.list_resources()
scope = rm.open_resource('USB0::0x1AB1::0x0588::DS1EU152500705::INSTR')
for channel in [1, 2]:
for setting in ["BWLIMIT", "COUPLING", "DISPLAY", "INVERT", "OFFSET", "PROBE", "SCALE", "FILTER", "MEMORYDEPTH", "VERNIER"]:
try:
result = scope.query(f":CHANNEL{channel}:{setting}?")
print(f"{channel}:{setting}:{result}")
except:
print(f"FAILED: {channel}:{setting}")
import time
def test_frequency_amplitude(frequency, amplitude, signal_gen, scope):
for chan in signal_gen.channels:
chan.frequency=frequency
chan.amplitude=amplitude
chan.phase=0
period = 1/float(frequency)
timescale="{:.20f}".format(float(period/5))
# Configure scope
scope.write(f":MEASURE:TOTAL ON")
scope.write(f":TIMebase:SCALE {timescale}")
for scope_channel in [1, 2]:
scope.write(f":CHANNEL{scope_channel}:probe 1")
scope.write(f":CHANNEL{scope_channel}:scale {amplitude/5}")
scope.write(f":CHANNEL{scope_channel}:offset 0")
# Configure signal generator
for chan in signal_gen.channels:
chan.frequency=frequency
chan.amplitude=amplitude
chan.offset = 0
chan.phase=0
for source in ["CHAN1", "CHAN2"]:
scope.write(f":MEASURE:SOURCE {source}")
time.sleep(1)
for param in ["FREQUENCY", "VPP", "VMIN", "VMAX", "VAMPLITUDE"]:
measured = scope.query_ascii_values(f":MEASURE:{param}?")[0]
print(f"{source}:{param}:{measured}")
test_frequency_amplitude(100, 10, signal_gen=signal_gen, scope=scope)
import numpy as np
np.log10(50e6)
for frequency in np.logspace(np.log10(100), np.log10(1000000), 2):
for amplitude in [20]:
test_frequency_amplitude(frequency, amplitude, signal_gen=signal_gen, scope=scope)
import pandas as pd
df = pd.DataFrame()
import uuid
def test_frequency_amplitude2(frequency, amplitude, signal_gen, scope):
for chan in signal_gen.channels:
chan.frequency=frequency
chan.amplitude=amplitude
chan.phase=0
period = 1/float(frequency)
timescale="{:.20f}".format(float(period/5))
# Configure scope
scope.write(f":MEASURE:TOTAL ON")
scope.write(f":TIMebase:SCALE {timescale}")
for scope_channel in [1, 2]:
scope.write(f":CHANNEL{scope_channel}:probe 1")
scope.write(f":CHANNEL{scope_channel}:scale {amplitude/5}")
scope.write(f":CHANNEL{scope_channel}:offset 0")
# Configure signal generator
for chan in signal_gen.channels:
chan.frequency=frequency
chan.amplitude=amplitude
chan.offset = 0
chan.phase=0
df = dict()
df["uuid"] = str(uuid.uuid4())
df["frequency"] = frequency
df["amplitude"] = amplitude
for source in ["CHAN1", "CHAN2"]:
scope.write(f":MEASURE:SOURCE {source}")
time.sleep(1)
for param in ["FREQUENCY", "VPP", "VMIN", "VMAX", "VAMPLITUDE"]:
measured = scope.query_ascii_values(f":MEASURE:{param}?")[0]
df[f"{source}_{param}"] = measured
return pd.DataFrame(df, index=[0])
df = df.append(test_frequency_amplitude2(100, 10, signal_gen, scope))
df = pd.DataFrame()
for frequency in np.logspace(np.log10(100), np.log10(1000000), 10):
for amplitude in [1, 5, 10, 20]:
result_df = test_frequency_amplitude2(frequency, amplitude, signal_gen=signal_gen, scope=scope)
df = df.append(result_df)
df.hist("frequency", bins=10)
def test_frequency_amplitude3(frequency, amplitude, signal_gen, scope):
for chan in signal_gen.channels:
chan.frequency=frequency
chan.amplitude=amplitude
chan.phase=0
period = 1/float(frequency)
timescale="{:.20f}".format(float(period/5))
# Configure scope
scope.write(f":MEASURE:TOTAL ON")
scope.write(f":TIMebase:SCALE {timescale}")
for scope_channel in [1, 2]:
scope.write(f":CHANNEL{scope_channel}:probe 1")
scope.write(f":CHANNEL{scope_channel}:scale {amplitude/5}")
scope.write(f":CHANNEL{scope_channel}:offset 0")
# Configure signal generator
for chan in signal_gen.channels:
chan.frequency=frequency
chan.amplitude=amplitude
chan.offset = 0
chan.phase=0
df = dict()
df["uuid"] = str(uuid.uuid4())
df["frequency"] = frequency
df["amplitude"] = amplitude
for source in ["CHAN1", "CHAN2"]:
scope.write(f":MEASURE:SOURCE {source}")
time.sleep(1)
for param in ['VPP',
'VMAX',
'VMIN',
'VAMPlitude',
'VTOP',
'VBASe',
'VAVerage',
'VRMS',
'OVERshoot',
'PREShoot',
'FREQuency',
'RISetime',
'FALLtime',
'PERiod',
'PWIDth',
'NWIDth',
'PDUTycycle',
'NDUTycycle',
'PDELay',
'NDELay',
'TOTal',
'SOURce',]:
try:
measured = scope.query_ascii_values(f":MEASURE:{param}?")[0]
except:
measured = scope.query(f":MEASURE:{param}?")[0]
df[f"{source}_{param}"] = measured
return pd.DataFrame(df, index=[0])
df = pd.DataFrame()
for frequency in np.logspace(np.log10(100), np.log10(100000000), 20):
for amplitude in [1, 5, 10, 20]:
result_df = test_frequency_amplitude2(frequency, amplitude, signal_gen=signal_gen, scope=scope)
df = df.append(result_df)
import seaborn as sns
sns.set(
rc={
"figure.figsize": (11, 8.5),
"figure.dpi": 300,
"figure.facecolor": "w",
"figure.edgecolor": "k",
}
)
palette = (sns.color_palette("Paired"))
sns.palplot(palette)
sns.set_palette(palette)
df.groupby(["frequency", "amplitude"]).agg()
data = scope.query_binary_values(":WAVEFORM:DATA? CHAN1")
plt.plot(data)
data = scope.query_binary_values(":WAVEFORM:DATA? CHAN2")
plt.plot(data)
scope.query(":ACQ:SAMP? CHANnel2")
scope.query(":ACQ:MEMD?")
scope.write(":ACQ:MEMD LONG")
for depth in ["NORMAL", "LONG"]:
scope.write(f":ACQ:MEMD {depth}")
time.sleep(0.5)
assert depth == scope.query(":ACQ:MEMD?").strip()
import matplotlib.pyplot as plt
data = scope.query_binary_values(":WAVEFORM:DATA? CHAN1", "B")
plt.plot(data)
data = scope.query_binary_values(":WAVEFORM:DATA? CHAN2", "B")
plt.plot(data)
?scope.query_binary_values
scope.query(":WAVEFORM:POINTS:MODE?")
scope.write(":WAVEFORM:DATA? CHANNEL1")
header = scope.read_raw()[:10]
header
scope.write(":WAVEFORM:DATA? CHANNEL1")
data = scope.read_raw()[10:]
data[0]
data[0:1]
data[0:2]
import numpy as np
np.array(56).tobytes()
np.array(56).tobytes("C")
np.array(56).tobytes("F")
np.array(56.0).tobytes("F")
np.frombuffer(np.array(56).tobytes("F"))
dt = np.dtype(float)
dt = dt.newbyteorder(">")
plt.plot(np.frombuffer(data))
np.frombuffer(b'\x01\x02', dtype=np.uint8)
np.frombuffer(b'\x01\x02\x03\x04\x05', dtype=np.uint8, count=3)
dt = np.dtype(float)
dt = dt.newbyteorder("<")
plt.plot(np.frombuffer(data))
###Output
_____no_output_____ |
docs/source/user_guide/utilities.ipynb | ###Markdown
Utilities Configuring LoggingEvalML uses [the standard Python logging package](https://docs.python.org/3/library/logging.html). By default, EvalML will log `INFO`-level logs and higher (warnings, errors and critical) to stdout, and will log everything to `evalml_debug.log` in the current working directory. If you want to change the location of the logfile, before import, set the `EVALML_LOG_FILE` environment variable to specify a filename within an existing directory in which you have write permission. If you want to disable logging to the logfile, set `EVALML_LOG_FILE` to be empty. If the environment variable is set to an invalid location, EvalML will print a warning message to stdout and will not create a log file. System InformationEvalML provides a command-line interface (CLI) tool that prints the version of EvalML and core dependencies installed, as well as some basic system information. To use this tool, just run `evalml info` in your shell or terminal. This could be useful for debugging purposes or tracking down any version-related issues.
###Code
!evalml info
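# A minimal sketch of the log-file configuration described above (the EVALML_LOG_FILE variable
# comes from this guide; the filename below is only illustrative):
import os
os.environ["EVALML_LOG_FILE"] = "evalml_run.log"  # must point into an existing, writable directory
# os.environ["EVALML_LOG_FILE"] = ""              # an empty value disables logging to a file
import evalml                                     # the variable must be set before this import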
###Output
_____no_output_____
###Markdown
Utilities Configuring LoggingEvalML uses [the standard Python logging package](https://docs.python.org/3/library/logging.html). Default logging behavior prints WARNING level logs and above (ERROR and CRITICAL) to stdout. To configure different behavior, please refer to the Python logging documentation. To see up-to-date feedback as `AutoMLSearch` runs, use the argument `verbose=True` when instantiating the object. This will temporarily set up a logging object to print INFO level logs and above to stdout, as well as display a graph of the best score over pipeline iterations. System InformationEvalML provides a command-line interface (CLI) tool that prints the version of EvalML and core dependencies installed, as well as some basic system information. To use this tool, just run `evalml info` in your shell or terminal. This could be useful for debugging purposes or tracking down any version-related issues.
###Code
!evalml info
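# A sketch of the verbose=True usage described above. Only the verbose argument is taken from
# this guide; the remaining constructor arguments reflect typical EvalML usage and may differ
# between versions, so treat them as assumptions.
# from evalml.automl import AutoMLSearch
# automl = AutoMLSearch(X_train=X, y_train=y, problem_type="binary", verbose=True)
# automl.search()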
###Output
_____no_output_____
###Markdown
Utilities Configuring Loggingrayml uses [the standard Python logging package](https://docs.python.org/3/library/logging.html). Default logging behavior prints WARNING level logs and above (ERROR and CRITICAL) to stdout. To configure different behavior, please refer to the Python logging documentation. To see up-to-date feedback as `AutoMLSearch` runs, use the argument `verbose=True` when instantiating the object. This will temporarily set up a logging object to print INFO level logs and above to stdout, as well as display a graph of the best score over pipeline iterations. System Informationrayml provides a command-line interface (CLI) tool that prints the version of rayml and core dependencies installed, as well as some basic system information. To use this tool, just run `rayml info` in your shell or terminal. This could be useful for debugging purposes or tracking down any version-related issues.
###Code
!rayml info
###Output
_____no_output_____ |
courses/machine_learning/deepdive/supplemental_gradient_boosting/labs/b_boosted_trees_estimator.ipynb | ###Markdown
Introduction In this notebook, we will: - Learn how to use the BoostedTrees classifier for training and evaluation - Explore how training can be sped up for small datasets - Develop intuition for how some of the hyperparameters affect the performance of boosted trees.
###Code
# We will use some np and pandas for dealing with input data.
import numpy as np
import pandas as pd
# And of course, we need tensorflow.
import tensorflow as tf
from distutils.version import StrictVersion
tf.__version__
###Output
_____no_output_____
###Markdown
Load datasetWe will be using the Titanic dataset, where the goal is to predict passenger survival given characteristics such as gender, age, class, etc.
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tf.set_random_seed(123)
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
fcol = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
return fcol.indicator_column(
fcol.categorical_column_with_vocabulary_list(feature_name,
vocab))
fc = []
for feature_name in CATEGORICAL_COLUMNS:
# Need to one-hot encode categorical features.
vocabulary = dftrain[feature_name].unique()
fc.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
fc.append(fcol.numeric_column(feature_name,
dtype=tf.float32))
# Prepare the input fn. Use the entire dataset for a batch since this is such a small dataset.
def make_input_fn(X, y, n_epochs=None, do_batching=True):
def input_fn():
BATCH_SIZE = len(y) # Use entire dataset.
dataset = tf.data.Dataset.from_tensor_slices((X.to_dict(orient='list'), y))
# For training, cycle thru dataset as many times as need (n_epochs=None).
dataset = dataset.repeat(n_epochs)
if do_batching:
dataset = dataset.batch(BATCH_SIZE)
return dataset
return input_fn
###Output
_____no_output_____
###Markdown
Training and Evaluating Classifiers
###Code
TRAIN_SIZE = len(dftrain)
params = {
'n_trees':10,
'center_bias':False,
'l2_regularization':1./TRAIN_SIZE # regularization is per instance, so if you are familiar with XGBoost, you need to divide these values by the num of examples per layer
}
###Output
_____no_output_____
###Markdown
Exercise: Train a Boosted Trees model using tf.estimator. What are the best results you can get? Train and evaluate the model. We will look at accuracy first.
###Code
# Training and evaluation input functions.
n_batches_per_layer = 1 # Use one batch, consisting of the entire dataset to build each layer in the tree.
DO_BATCHING = True
train_input_fn = make_input_fn(dftrain, y_train, n_epochs=None, do_batching=DO_BATCHING)
eval_input_fn = make_input_fn(dfeval, y_eval, n_epochs=1, do_batching=DO_BATCHING)
est = # TODO
est.train(train_input_fn)
# Eval.
pd.Series(est.evaluate(eval_input_fn))
###Output
_____no_output_____
###Markdown
Exercise 2: Can you get better performance out of the classifier? How do the results compare to using a DNN? Accuracy and AUC? Results Let's understand how our model is performing.
###Code
pred_dicts = list(est.predict(eval_input_fn))
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
probs.plot(kind='hist', bins=20, title='predicted probabilities');
###Output
_____no_output_____
###Markdown
**???** Why are the probabilities right skewed? Let's plot an ROC curve to understand model performance for various prediction probabilities.
###Code
from sklearn.metrics import roc_curve
from matplotlib import pyplot as plt
fpr, tpr, _ = roc_curve(y_eval, probs)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0,)
plt.ylim(0,);
###Output
_____no_output_____
###Markdown
Introduction In this notebook, we will: - Learn how to use the BoostedTrees classifier for training and evaluation - Explore how training can be sped up for small datasets - Develop intuition for how some of the hyperparameters affect the performance of boosted trees.
###Code
# We will use some np and pandas for dealing with input data.
import numpy as np
import pandas as pd
# And of course, we need tensorflow
import tensorflow as tf
from distutils.version import StrictVersion
tf.__version__
###Output
_____no_output_____
###Markdown
Load datasetWe will be using the Titanic dataset, where the goal is to predict passenger survival given characteristics such as gender, age, class, etc.
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tf.set_random_seed(123)
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
dftrain.head()
dftrain['age'].hist()
dftrain['embark_town'].value_counts()
fcol = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
return fcol.indicator_column(
fcol.categorical_column_with_vocabulary_list(feature_name,
vocab))
fc = []
for feature_name in CATEGORICAL_COLUMNS:
# Need to one-hot encode categorical features.
vocabulary = dftrain[feature_name].unique()
fc.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
fc.append(fcol.numeric_column(feature_name,
dtype=tf.float32))
# Prepare the input fn. Use the entire dataset for a batch since this is such a small dataset.
def make_input_fn(X, y, n_epochs=None, do_batching=True):
def input_fn():
BATCH_SIZE = len(y) # Use entire dataset.
dataset = tf.data.Dataset.from_tensor_slices((X.to_dict(orient='list'), y))
# For training, cycle thru dataset as many times as need (n_epochs=None).
dataset = dataset.repeat(n_epochs)
if do_batching:
dataset = dataset.batch(BATCH_SIZE)
return dataset
return input_fn
###Output
_____no_output_____
###Markdown
Training and Evaluating Classifiers Exercise: Train a Boosted Trees model using tf.estimator. What are the best results you can get? Train and evaluate the model. We will look at accuracy first.
###Code
TRAIN_SIZE = len(dftrain)
params = {
'n_trees':10,
'center_bias':False,
'l2_regularization':1./TRAIN_SIZE # regularization is per instance, so if you are familiar with XGBoost, you need to divide these values by the num of examples per layer
}
# Training and evaluation input functions.
n_batches_per_layer = 1 # Use one batch, consisting of the entire dataset to build each layer in the tree.
DO_BATCHING = True
train_input_fn = make_input_fn(dftrain, y_train, n_epochs=None, do_batching=DO_BATCHING)
eval_input_fn = make_input_fn(dfeval, y_eval, n_epochs=1, do_batching=DO_BATCHING)
est = tf.estimator.BoostedTreesClassifier(fc, n_batches_per_layer, **params)
est.train(train_input_fn)
# Eval.
pd.Series(est.evaluate(eval_input_fn))
###Output
WARNING: Logging before flag parsing goes to stderr.
I0722 14:31:35.565665 139996009612736 estimator.py:1790] Using default config.
W0722 14:31:35.569427 139996009612736 estimator.py:1811] Using temporary folder as model directory: /tmp/tmpp04B2v
I0722 14:31:35.573219 139996009612736 estimator.py:209] Using config: {'_save_checkpoints_secs': 600, '_num_ps_replicas': 0, '_keep_checkpoint_max': 5, '_task_type': 'worker', '_global_id_in_cluster': 0, '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f5300ba1f90>, '_model_dir': '/tmp/tmpp04B2v', '_protocol': None, '_save_checkpoints_steps': None, '_keep_checkpoint_every_n_hours': 10000, '_service': None, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_tf_random_seed': None, '_save_summary_steps': 100, '_device_fn': None, '_experimental_distribute': None, '_num_worker_replicas': 1, '_task_id': 0, '_log_step_count_steps': 100, '_experimental_max_worker_delay_secs': None, '_evaluation_master': '', '_eval_distribute': None, '_train_distribute': None, '_master': ''}
W0722 14:31:35.575416 139996009612736 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/canned/boosted_trees.py:297: _num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
W0722 14:31:35.735229 139996009612736 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/training/training_util.py:236: initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
I0722 14:31:35.799928 139996009612736 estimator.py:1145] Calling model_fn.
W0722 14:31:35.815185 139996009612736 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/feature_column/feature_column.py:2115: _transform_feature (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
W0722 14:31:35.819067 139996009612736 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/feature_column/feature_column.py:2115: _transform_feature (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
W0722 14:31:35.820725 139996009612736 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/feature_column/feature_column_v2.py:4236: _get_sparse_tensors (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
W0722 14:31:35.822936 139996009612736 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/feature_column/feature_column.py:2115: _transform_feature (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
W0722 14:31:35.827809 139996009612736 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/feature_column/feature_column_v2.py:2655: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
W0722 14:31:35.839543 139996009612736 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/feature_column/feature_column_v2.py:4207: _variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
W0722 14:31:35.951092 139996009612736 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/canned/boosted_trees.py:157: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
W0722 14:31:35.994098 139996009612736 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/canned/head.py:437: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
I0722 14:31:36.278825 139996009612736 estimator.py:1147] Done calling model_fn.
I0722 14:31:36.280870 139996009612736 basic_session_run_hooks.py:541] Create CheckpointSaverHook.
W0722 14:31:36.372642 139996009612736 meta_graph.py:449] Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
I0722 14:31:36.515558 139996009612736 monitored_session.py:240] Graph was finalized.
I0722 14:31:36.617985 139996009612736 session_manager.py:500] Running local_init_op.
I0722 14:31:36.649574 139996009612736 session_manager.py:502] Done running local_init_op.
W0722 14:31:37.024797 139996009612736 meta_graph.py:449] Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
I0722 14:31:37.106653 139996009612736 basic_session_run_hooks.py:606] Saving checkpoints for 0 into /tmp/tmpp04B2v/model.ckpt.
W0722 14:31:37.207247 139996009612736 meta_graph.py:449] Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
I0722 14:31:38.080131 139996009612736 basic_session_run_hooks.py:262] loss = 0.6931468, step = 0
W0722 14:31:38.771338 139996009612736 basic_session_run_hooks.py:724] It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
I0722 14:31:39.974495 139996009612736 basic_session_run_hooks.py:606] Saving checkpoints for 60 into /tmp/tmpp04B2v/model.ckpt.
W0722 14:31:40.059488 139996009612736 meta_graph.py:449] Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
I0722 14:31:40.125475 139996009612736 estimator.py:368] Loss for final step: 0.30194622.
I0722 14:31:40.183000 139996009612736 estimator.py:1145] Calling model_fn.
W0722 14:31:40.707596 139996009612736 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/metrics_impl.py:2027: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
W0722 14:31:41.083235 139996009612736 metrics_impl.py:804] Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to "careful_interpolation" instead.
W0722 14:31:41.105353 139996009612736 metrics_impl.py:804] Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to "careful_interpolation" instead.
I0722 14:31:41.127335 139996009612736 estimator.py:1147] Done calling model_fn.
I0722 14:31:41.148010 139996009612736 evaluation.py:255] Starting evaluation at 2019-07-22T14:31:41Z
I0722 14:31:41.260328 139996009612736 monitored_session.py:240] Graph was finalized.
W0722 14:31:41.262229 139996009612736 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
I0722 14:31:41.264652 139996009612736 saver.py:1280] Restoring parameters from /tmp/tmpp04B2v/model.ckpt-60
I0722 14:31:41.362113 139996009612736 session_manager.py:500] Running local_init_op.
I0722 14:31:41.446366 139996009612736 session_manager.py:502] Done running local_init_op.
I0722 14:31:42.565407 139996009612736 evaluation.py:275] Finished evaluation at 2019-07-22-14:31:42
I0722 14:31:42.567086 139996009612736 estimator.py:2039] Saving dict for global step 60: accuracy = 0.8068182, accuracy_baseline = 0.625, auc = 0.8663299, auc_precision_recall = 0.85031575, average_loss = 0.41991314, global_step = 60, label/mean = 0.375, loss = 0.41991314, precision = 0.75, prediction/mean = 0.3852217, recall = 0.72727275
W0722 14:31:42.734955 139996009612736 meta_graph.py:449] Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
I0722 14:31:42.781467 139996009612736 estimator.py:2099] Saving 'checkpoint_path' summary for global step 60: /tmp/tmpp04B2v/model.ckpt-60
###Markdown
Base model test data: accuracy 0.806818, accuracy_baseline 0.625000, auc 0.866330, auc_precision_recall 0.850316, average_loss 0.419913, global_step 60, label/mean 0.375000, loss 0.419913, precision 0.750000, prediction/mean 0.385222, recall 0.727273. Base model train data: accuracy 0.886762, accuracy_baseline 0.612440, auc 0.946545, auc_precision_recall 0.934759, average_loss 0.300738, global_step 60, label/mean 0.387560, loss 0.300738, precision 0.887387, prediction/mean 0.387528, recall 0.810700.
###Code
pd.Series(est.evaluate(make_input_fn(dftrain, y_train, n_epochs=1, do_batching=DO_BATCHING)))
###Output
I0722 14:31:42.856827 139996009612736 estimator.py:1145] Calling model_fn.
W0722 14:31:43.761646 139996009612736 metrics_impl.py:804] Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to "careful_interpolation" instead.
W0722 14:31:43.783691 139996009612736 metrics_impl.py:804] Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to "careful_interpolation" instead.
I0722 14:31:43.805263 139996009612736 estimator.py:1147] Done calling model_fn.
I0722 14:31:43.825486 139996009612736 evaluation.py:255] Starting evaluation at 2019-07-22T14:31:43Z
I0722 14:31:43.935919 139996009612736 monitored_session.py:240] Graph was finalized.
I0722 14:31:43.938720 139996009612736 saver.py:1280] Restoring parameters from /tmp/tmpp04B2v/model.ckpt-60
I0722 14:31:44.034354 139996009612736 session_manager.py:500] Running local_init_op.
I0722 14:31:44.123128 139996009612736 session_manager.py:502] Done running local_init_op.
I0722 14:31:45.216234 139996009612736 evaluation.py:275] Finished evaluation at 2019-07-22-14:31:45
I0722 14:31:45.218225 139996009612736 estimator.py:2039] Saving dict for global step 60: accuracy = 0.8867624, accuracy_baseline = 0.6124402, auc = 0.94654495, auc_precision_recall = 0.9347591, average_loss = 0.30073795, global_step = 60, label/mean = 0.3875598, loss = 0.30073795, precision = 0.8873874, prediction/mean = 0.38752845, recall = 0.8106996
I0722 14:31:45.225095 139996009612736 estimator.py:2099] Saving 'checkpoint_path' summary for global step 60: /tmp/tmpp04B2v/model.ckpt-60
###Markdown
Exercise 2: Can you get better performance out of the classifier? How do the results compare to using a DNN? Accuracy and AUC? Results Let's understand how our model is performing.
###Code
pred_dicts = list(est.predict(eval_input_fn))
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
y_preds = pd.Series([pred['class_ids'][0] for pred in pred_dicts])
probs.plot(kind='hist', bins=20, title='predicted probabilities');
###Output
I0722 14:31:45.312201 139996009612736 estimator.py:1145] Calling model_fn.
I0722 14:31:45.696373 139996009612736 estimator.py:1147] Done calling model_fn.
I0722 14:31:45.792782 139996009612736 monitored_session.py:240] Graph was finalized.
I0722 14:31:45.796041 139996009612736 saver.py:1280] Restoring parameters from /tmp/tmpp04B2v/model.ckpt-60
I0722 14:31:45.849188 139996009612736 session_manager.py:500] Running local_init_op.
I0722 14:31:45.868169 139996009612736 session_manager.py:502] Done running local_init_op.
###Markdown
**???** Why are the probabilities right skewed?
###Code
y_train.value_counts()
###Output
_____no_output_____
###Markdown
Let's plot an ROC curve to understand model performance for various prediction probabilities.
###Code
from sklearn.metrics import confusion_matrix, roc_curve
from matplotlib import pyplot as plt
fpr, tpr, _ = roc_curve(y_eval, probs)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0,)
plt.ylim(0,);
###Output
_____no_output_____
###Markdown
**???** What do the true positive rate and false positive rate refer to for this dataset?
###Code
confusion_matrix(y_eval, y_preds)
###Output
_____no_output_____
###Markdown
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
###Code
TRAIN_SIZE = len(dftrain)
params = {
'n_trees':20,
'center_bias':False,
'max_depth' : 2,
'l2_regularization':1./TRAIN_SIZE # regularization is per instance, so if you are familiar with XGBoost, you need to divide these values by the num of examples per layer
}
# Training and evaluation input functions.
n_batches_per_layer = 1 # Use one batch, consisting of the entire dataset to build each layer in the tree.
DO_BATCHING = True
train_input_fn = make_input_fn(dftrain, y_train, n_epochs=None, do_batching=DO_BATCHING)
eval_input_fn = make_input_fn(dfeval, y_eval, n_epochs=1, do_batching=DO_BATCHING)
est = tf.estimator.BoostedTreesClassifier(fc, n_batches_per_layer, **params)
est.train(train_input_fn)
# Eval.
pd.Series(est.evaluate(eval_input_fn))
TRAIN_SIZE = len(dftrain)
params = {
'n_trees':100,
'center_bias':False,
'max_depth' : 2,
'l2_regularization':1./TRAIN_SIZE # regularization is per instance, so if you are familiar with XGBoost, you need to divide these values by the num of examples per layer
}
# Training and evaluation input functions.
n_batches_per_layer = 1 # Use one batch, consisting of the entire dataset to build each layer in the tree.
DO_BATCHING = True
train_input_fn = make_input_fn(dftrain, y_train, n_epochs=None, do_batching=DO_BATCHING)
eval_input_fn = make_input_fn(dfeval, y_eval, n_epochs=1, do_batching=DO_BATCHING)
est = tf.estimator.BoostedTreesClassifier(fc, n_batches_per_layer, **params)
est.train(train_input_fn)
# Eval.
eval_results = pd.Series(est.evaluate(eval_input_fn))
train_results = pd.Series(est.evaluate(make_input_fn(dftrain, y_train, n_epochs=1, do_batching=DO_BATCHING)))
pd.DataFrame({'Train': train_results, 'Eval': eval_results})
pred_dicts = list(est.predict(eval_input_fn))
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
y_preds = pd.Series([pred['class_ids'][0] for pred in pred_dicts])
probs.plot(kind='hist', bins=20, title='predicted probabilities');
fpr, tpr, _ = roc_curve(y_eval, probs)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0,)
plt.ylim(0,);
TRAIN_SIZE = len(dftrain)
params = {
'n_trees':100,
'max_depth' : 4,
'l2_regularization':1./TRAIN_SIZE, # regularization is per instance, so if you are familiar with XGBoost, you need to divide these values by the num of examples per layer
'center_bias':False
}
# Training and evaluation input functions.
n_batches_per_layer = 1 # Use one batch, consisting of the entire dataset to build each layer in the tree.
DO_BATCHING = True
train_input_fn = make_input_fn(dftrain, y_train, n_epochs=None, do_batching=DO_BATCHING)
eval_input_fn = make_input_fn(dfeval, y_eval, n_epochs=1, do_batching=DO_BATCHING)
est = tf.estimator.BoostedTreesClassifier(fc, n_batches_per_layer, **params)
est.train(train_input_fn)
# Eval.
eval_results = pd.Series(est.evaluate(eval_input_fn))
train_results = pd.Series(est.evaluate(make_input_fn(dftrain, y_train, n_epochs=1, do_batching=DO_BATCHING)))
pd.DataFrame({'Train': train_results, 'Eval': eval_results})
est.experimental_feature_importances(normalize=True)
merged = pd.concat([dftrain, y_train], axis=1)
merged.groupby('sex').survived.mean().plot(kind='barh')
merged['fare'].corr(merged['survived'])
merged['age'].corr(merged['survived'])
df_survived = merged[merged['survived'] == 1]
df_died = merged[merged['survived'] == 0]
bins = [0,10,20,30,40,50,60,70,80]
df_survived['age'].hist(alpha=0.5, color='green', bins=bins, normed=True)
df_died['age'].hist(alpha=0.5, color='red', bins=bins, normed=True)
plt.show()
df_survived['fare'].hist(alpha=0.5, color='green', normed=True)
df_died['fare'].hist(alpha=0.5, color='red', normed=True)
plt.show()
pd.Series(est.evaluate(make_input_fn(dftrain, y_train, n_epochs=1, do_batching=DO_BATCHING)))
###Output
I0722 14:41:29.885025 139996009612736 estimator.py:1145] Calling model_fn.
W0722 14:41:30.820516 139996009612736 metrics_impl.py:804] Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to "careful_interpolation" instead.
W0722 14:41:30.841654 139996009612736 metrics_impl.py:804] Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to "careful_interpolation" instead.
I0722 14:41:30.864413 139996009612736 estimator.py:1147] Done calling model_fn.
I0722 14:41:30.885152 139996009612736 evaluation.py:255] Starting evaluation at 2019-07-22T14:41:30Z
I0722 14:41:30.994900 139996009612736 monitored_session.py:240] Graph was finalized.
I0722 14:41:30.998346 139996009612736 saver.py:1280] Restoring parameters from /tmp/tmpi7fyO5/model.ckpt-400
I0722 14:41:31.093931 139996009612736 session_manager.py:500] Running local_init_op.
I0722 14:41:31.177222 139996009612736 session_manager.py:502] Done running local_init_op.
I0722 14:41:32.410644 139996009612736 evaluation.py:275] Finished evaluation at 2019-07-22-14:41:32
I0722 14:41:32.412698 139996009612736 estimator.py:2039] Saving dict for global step 400: accuracy = 0.9298246, accuracy_baseline = 0.6124402, auc = 0.9778217, auc_precision_recall = 0.9711784, average_loss = 0.21227255, global_step = 400, label/mean = 0.3875598, loss = 0.21227255, precision = 0.93449783, prediction/mean = 0.3870696, recall = 0.88065845
I0722 14:41:32.421211 139996009612736 estimator.py:2099] Saving 'checkpoint_path' summary for global step 400: /tmp/tmpi7fyO5/model.ckpt-400
###Markdown
Compare performance to a DNN
###Code
TRAIN_SIZE = len(dftrain)
# Training and evaluation input functions.
n_batches_per_layer = 1 # Use one batch, consisting of the entire dataset to build each layer in the tree.
DO_BATCHING = True
train_input_fn = make_input_fn(dftrain, y_train, n_epochs=None, do_batching=DO_BATCHING)
eval_input_fn = make_input_fn(dfeval, y_eval, n_epochs=1, do_batching=DO_BATCHING)
est = tf.estimator.DNNClassifier(feature_columns = fc, hidden_units = [10, 10])
est.train(train_input_fn, max_steps=1000)
# Eval.
pd.Series(est.evaluate(eval_input_fn))
TRAIN_SIZE = len(dftrain)
params = {
'n_trees':20,
'center_bias':False,
'max_depth' : 6,
'l2_regularization':1./TRAIN_SIZE # regularization is per instance, so if you are familiar with XGBoost, you need to divide these values by the num of examples per layer
}
# Training and evaluation input functions.
n_batches_per_layer = 1 # Use one batch, consisting of the entire dataset to build each layer in the tree.
DO_BATCHING = True
train_input_fn = make_input_fn(dftrain, y_train, n_epochs=None, do_batching=DO_BATCHING)
eval_input_fn = make_input_fn(dfeval, y_eval, n_epochs=1, do_batching=DO_BATCHING)
est = tf.estimator.BoostedTreesClassifier(fc, n_batches_per_layer, **params)
est.train(train_input_fn)
# Eval.
eval_results = pd.Series(est.evaluate(eval_input_fn))
train_results = pd.Series(est.evaluate(make_input_fn(dftrain, y_train, n_epochs=1, do_batching=DO_BATCHING)))
pd.DataFrame({'Train': train_results, 'Eval': eval_results})
TRAIN_SIZE = len(dftrain)
params = {
'n_trees':50,
'center_bias':False,
'max_depth' : 6,
'l2_regularization':1./TRAIN_SIZE # regularization is per instance, so if you are familiar with XGBoost, you need to divide these values by the num of examples per layer
}
# Training and evaluation input functions.
n_batches_per_layer = 1 # Use one batch, consisting of the entire dataset to build each layer in the tree.
DO_BATCHING = True
train_input_fn = make_input_fn(dftrain, y_train, n_epochs=None, do_batching=DO_BATCHING)
eval_input_fn = make_input_fn(dfeval, y_eval, n_epochs=1, do_batching=DO_BATCHING)
est = tf.estimator.BoostedTreesClassifier(fc, n_batches_per_layer, **params)
est.train(train_input_fn)
# Eval.
eval_results = pd.Series(est.evaluate(eval_input_fn))
train_results = pd.Series(est.evaluate(make_input_fn(dftrain, y_train, n_epochs=1, do_batching=DO_BATCHING)))
pd.DataFrame({'Train': train_results, 'Eval': eval_results})
import pandas_profiling
pandas_profiling.ProfileReport(dftrain)
pred_dicts = list(est.predict(eval_input_fn))
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
y_preds = pd.Series([pred['class_ids'][0] for pred in pred_dicts])
probs.plot(kind='hist', bins=20, title='predicted probabilities');
fpr, tpr, _ = roc_curve(y_eval, probs)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0,)
plt.ylim(0,);
###Output
_____no_output_____
###Markdown
Introduction In this notebook, we will: - Learn how to use the BoostedTrees classifier for training and evaluation - Explore how training can be sped up for small datasets - Develop intuition for how some of the hyperparameters affect the performance of boosted trees.
###Code
# We will use some np and pandas for dealing with input data.
import numpy as np
import pandas as pd
# And of course, we need tensorflow.
import tensorflow as tf
from distutils.version import StrictVersion
tf.__version__
###Output
_____no_output_____
###Markdown
Load datasetWe will be using the Titanic dataset, where the goal is to predict passenger survival given characteristics such as gender, age, class, etc.
###Code
tf.logging.set_verbosity(tf.logging.INFO)
tf.set_random_seed(123)
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
fcol = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
return fcol.indicator_column(
fcol.categorical_column_with_vocabulary_list(feature_name,
vocab))
fc = []
for feature_name in CATEGORICAL_COLUMNS:
# Need to one-hot encode categorical features.
vocabulary = dftrain[feature_name].unique()
fc.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
fc.append(fcol.numeric_column(feature_name,
dtype=tf.float32))
# Prepare the input fn. Use the entire dataset for a batch since this is such a small dataset.
def make_input_fn(X, y, n_epochs=None, do_batching=True):
def input_fn():
BATCH_SIZE = len(y) # Use entire dataset.
dataset = tf.data.Dataset.from_tensor_slices((X.to_dict(orient='list'), y))
# For training, cycle thru dataset as many times as need (n_epochs=None).
dataset = dataset.repeat(n_epochs)
if do_batching:
dataset = dataset.batch(BATCH_SIZE)
return dataset
return input_fn
###Output
_____no_output_____
###Markdown
Training and Evaluating Classifiers
###Code
TRAIN_SIZE = len(dftrain)
params = {
'n_trees':10,
'center_bias':False,
'l2_regularization':1./TRAIN_SIZE # regularization is per instance, so if you are familiar with XGBoost, you need to divide these values by the num of examples per layer
}
###Output
_____no_output_____
###Markdown
Exercise: Train a Boosted Trees model using tf.estimator. What are the best results you can get? Train and evaluate the model. We will look at accuracy first.
###Code
# Training and evaluation input functions.
n_batches_per_layer = 1 # Use one batch, consisting of the entire dataset to build each layer in the tree.
DO_BATCHING = True
train_input_fn = make_input_fn(dftrain, y_train, n_epochs=None, do_batching=DO_BATCHING)
eval_input_fn = make_input_fn(dfeval, y_eval, n_epochs=1, do_batching=DO_BATCHING)
est = # TODO
est.train(train_input_fn)
# Eval.
pd.Series(est.evaluate(eval_input_fn))
###Output
_____no_output_____
###Markdown
Exercise 2: Can you get better performance out of the classifier? How do the results compare to using a DNN? Accuracy and AUC? Results Let's understand how our model is performing.
###Code
pred_dicts = list(est.predict(eval_input_fn))
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
probs.plot(kind='hist', bins=20, title='predicted probabilities');
###Output
_____no_output_____
###Markdown
**???** Why are the probabilities right skewed? Let's plot an ROC curve to understand model performance for various prediction probabilities.
###Code
from sklearn.metrics import roc_curve
from matplotlib import pyplot as plt
fpr, tpr, _ = roc_curve(y_eval, probs)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0,)
plt.ylim(0,);
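# One possible baseline for Exercise 2 (a sketch mirroring the DNN estimator used in the
# solution section earlier in this document; tune hidden_units and max_steps as needed):
# est_dnn = tf.estimator.DNNClassifier(feature_columns=fc, hidden_units=[10, 10])
# est_dnn.train(train_input_fn, max_steps=1000)
# pd.Series(est_dnn.evaluate(eval_input_fn))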
###Output
_____no_output_____ |
notebooks/federated_learning/federated_learning_basic_concepts_random_seed.ipynb | ###Markdown
Federated learning: random seedThis notebook is a copy of the notebook [Federated learning basic concepts](./federated_learning_basic_concepts.ipynb). The difference is that, here, we set a seed using the [Reproducibility](https://github.com/sherpaai/Sherpa.ai-Federated-Learning-Framework/blob/master/shfl/private/reproducibility.py) singleton class, in order to ensure the reproducibility of the experiment. If you execute this experiment many times, you should always obtain the same results. Apart from that, the structure is identical, so the explanatory text has been removed for clarity. Please refer to the original notebook for a detailed description of the experiment.
###Code
from shfl.private.reproducibility import Reproducibility
# Server
Reproducibility(1234)
# In case of client
# Reproducibility.get_instance().set_seed(ID)
###Output
_____no_output_____
###Markdown
The data
###Code
import matplotlib.pyplot as plt
import shfl
database = shfl.data_base.Emnist()
train_data, train_labels, test_data, test_labels = database.load_data()
print(len(train_data))
print(len(test_data))
print(type(train_data[0]))
train_data[0].shape
plt.imshow(train_data[0])
iid_distribution = shfl.data_distribution.IidDataDistribution(database)
federated_data, test_data, test_labels = iid_distribution.get_federated_data(num_nodes=20, percent=10)
print(type(federated_data))
print(federated_data.num_nodes())
federated_data[0].private_data
###Output
_____no_output_____
###Markdown
The model
###Code
import tensorflow as tf
def model_builder():
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', strides=1, input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2, strides=2, padding='valid'))
model.add(tf.keras.layers.Dropout(0.4))
model.add(tf.keras.layers.Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', strides=1))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2, strides=2, padding='valid'))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.1))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
criterion = tf.keras.losses.CategoricalCrossentropy()
optimizer = tf.keras.optimizers.RMSprop()
metrics = [tf.keras.metrics.categorical_accuracy]
return shfl.model.DeepLearningModel(model=model, criterion=criterion, optimizer=optimizer, metrics=metrics)
aggregator = shfl.federated_aggregator.FedAvgAggregator()
federated_government = shfl.federated_government.FederatedGovernment(model_builder, federated_data, aggregator)
import numpy as np
class Reshape(shfl.private.FederatedTransformation):
def apply(self, labeled_data):
labeled_data.data = np.reshape(labeled_data.data, (labeled_data.data.shape[0], labeled_data.data.shape[1], labeled_data.data.shape[2],1))
shfl.private.federated_operation.apply_federated_transformation(federated_data, Reshape())
import numpy as np
class Normalize(shfl.private.FederatedTransformation):
def __init__(self, mean, std):
self.__mean = mean
self.__std = std
def apply(self, labeled_data):
labeled_data.data = (labeled_data.data - self.__mean)/self.__std
mean = np.mean(train_data.data)
std = np.std(train_data.data)
shfl.private.federated_operation.apply_federated_transformation(federated_data, Normalize(mean, std))
###Output
_____no_output_____
###Markdown
Run the federated learning experiment
###Code
test_data = np.reshape(test_data, (test_data.shape[0], test_data.shape[1], test_data.shape[2],1))
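# run_rounds(3, ...) performs three federated rounds of local training followed by FedAvg
# aggregation; the reshaped test set is passed in to evaluate the resulting global model.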
federated_government.run_rounds(3, test_data, test_labels)
###Output
_____no_output_____
###Markdown
Federated learning: Simple experiment with seedIn this notebook we provide a simple example of how to run an experiment in a federated environment with the help of this framework. We are going to use a popular dataset to start experimenting in a federated environment. The framework provides some functions to load the [Emnist](https://www.nist.gov/itl/products-and-services/emnist-dataset) Digits dataset. This notebook is a copy of the [Basic Concepts](./federated_learning_basic_concepts.ipynb) notebook. The difference is that here we set a seed using the [Reproducibility](https://github.com/sherpaai/Sherpa.ai-Federated-Learning-Framework/blob/master/shfl/private/reproducibility.py) singleton class in order to ensure the reproducibility of the experiment. If you execute this experiment many times, you should obtain the same results.
###Code
from shfl.private.reproducibility import Reproducibility
# Server
Reproducibility(1234)
# In case of client
# Reproducibility.get_instance().set_seed(ID)
import matplotlib.pyplot as plt
import shfl
database = shfl.data_base.Emnist()
train_data, train_labels, test_data, test_labels = database.load_data()
###Output
_____no_output_____
###Markdown
Let's inspect some properties of the loaded data.
###Code
print(len(train_data))
print(len(test_data))
print(type(train_data[0]))
train_data[0].shape
###Output
_____no_output_____
###Markdown
So, as we have seen, our dataset is composed of 28-by-28 matrices. Before starting with the federated scenario, we can take a look at a sample from the training data.
###Code
plt.imshow(train_data[0])
###Output
_____no_output_____
###Markdown
We are going to simulate a federated learning scenario with a set of client nodes containing private data and a central server that is responsible for coordinating the different clients. First of all, we have to simulate the data contained in every client, and we will use the previously loaded dataset to do so. The assumption in this example is that the data are independent and identically distributed (IID), with every node holding approximately the same amount of data. There are several ways to distribute the data, and the data distribution is one of the factors with the greatest impact on a federated algorithm. Therefore, the framework implements some of the most common distributions, which let you experiment with different situations easily. In [Federated Sampling](./federated_learning_sampling.ipynb) you can dig into the options that the framework currently provides.
###Code
iid_distribution = shfl.data_distribution.IidDataDistribution(database)
federated_data, test_data, test_labels = iid_distribution.get_federated_data(num_nodes=20, percent=10)
###Output
_____no_output_____
###Markdown
That's it! We have created federated data from the Emnist dataset using 20 nodes and 10 percent of the available data. This data is distributed to a set of data nodes in the form of private data. Let's learn a little more about the federated data.
###Code
print(type(federated_data))
print(federated_data.num_nodes())
federated_data[0].private_data
###Output
_____no_output_____
###Markdown
As we can see, private data in a node is not directly accessible, but the framework provides mechanisms to use this data in a machine learning model. A federated learning algorithm is defined by a machine learning model, locally deployed in each node, that learns from the respective node's private data, and an aggregation mechanism that combines the model parameters uploaded by the client nodes at a central node. In this example we will use a deep learning model built with Keras. The framework provides classes that allow using Tensorflow (see [Basic Concepts Tensorflow](./federated_learning_basic_concepts_tensorflow.ipynb)) and Keras models in a federated learning scenario; your only job is to create a function acting as a model builder. Moreover, the framework provides classes that allow using pretrained Tensorflow and Keras models (see [Basic Concepts Pretrained Models](./federated_learning_basic_concepts_pretrained_model.ipynb)). In this example we build a Keras learning model.
###Code
import tensorflow as tf
def model_builder():
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', strides=1, input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2, strides=2, padding='valid'))
model.add(tf.keras.layers.Dropout(0.4))
model.add(tf.keras.layers.Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', strides=1))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2, strides=2, padding='valid'))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.1))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
model.compile(optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"])
return shfl.model.DeepLearningModel(model)
###Output
_____no_output_____
###Markdown
Now, the only piece missing is the aggregation operator. Fortunately, the framework provides several aggregation operators that we can use. In the following piece of code we define the federated aggregation mechanism. Moreover, we define the federated government based on the Keras learning model, the federated data and the aggregation mechanism.
###Code
aggregator = shfl.federated_aggregator.FedAvgAggregator()
federated_government = shfl.federated_government.FederatedGovernment(model_builder, federated_data, aggregator)
###Output
_____no_output_____
###Markdown
If you want to see all the aggregation operators, you can check the following notebook: [Federated Aggregation Operators](./federated_learning_basic_concepts_aggregation_operators.ipynb). Before running the algorithm, we want to apply a transformation to the data. Good practice here is to define a federated operation that ensures the transformation is applied to the federated data in all the client nodes. We want to reshape the data, so we define the following FederatedTransformation.
###Code
import numpy as np
class Reshape(shfl.private.FederatedTransformation):
def apply(self, labeled_data):
labeled_data.data = np.reshape(labeled_data.data, (labeled_data.data.shape[0], labeled_data.data.shape[1], labeled_data.data.shape[2],1))
shfl.private.federated_operation.apply_federated_transformation(federated_data, Reshape())
###Output
_____no_output_____
###Markdown
In addition, we want to normalize the data. We define a federated transformation using mean and standard deviation (std) parameters. We use mean and std estimated from the training set in this example. Although the ideal parameters would be an aggregation of the mean and std of each client's training datasets, we use the mean and std of the global dataset as a simple approximation.
###Code
import numpy as np
class Normalize(shfl.private.FederatedTransformation):
def __init__(self, mean, std):
self.__mean = mean
self.__std = std
def apply(self, labeled_data):
labeled_data.data = (labeled_data.data - self.__mean)/self.__std
mean = np.mean(train_data.data)
std = np.std(train_data.data)
shfl.private.federated_operation.apply_federated_transformation(federated_data, Normalize(mean, std))
###Output
_____no_output_____
###Markdown
We are now ready to execute our federated learning algorithm.
###Code
test_data = np.reshape(test_data, (test_data.shape[0], test_data.shape[1], test_data.shape[2],1))
federated_government.run_rounds(3, test_data, test_labels)
###Output
_____no_output_____ |
notebooks/Digit Recognizer.ipynb | ###Markdown
Says One Neuron To Another Neural network architectures 1. Set up a new git repository in your GitHub account 2. Pick two datasets from https://en.wikipedia.org/wiki/List_of_datasets_for_machine-learning_research 3. Choose a programming language (Python, C/C++, Java) 4. Formulate ideas on how neural networks can be used to accomplish the task for the specific dataset 5. Build a neural network to model the prediction process programmatically 6. Document your process and results 7. Commit your source code, documentation and other supporting files to the git repository in GitHub Dataset:`tf.keras.datasets.mnist.load_data(path="mnist.npz")`- This is a dataset of 60,000 28x28 grayscale images of the 10 digits, along with a test set of 10,000 images. - x_train, x_test: uint8 arrays of grayscale image data with shapes (num_samples, 28, 28).- y_train, y_test: uint8 arrays of digit labels (integers in range 0-9) with shapes (num_samples,).- License: Yann LeCun and Corinna Cortes hold the copyright of the MNIST dataset, which is a derivative work from the original NIST datasets. The MNIST dataset is made available under the terms of the Creative Commons Attribution-Share Alike 3.0 license.- The data files train.csv and test.csv contain gray-scale images of hand-drawn digits, from zero through nine.- Each image is 28 pixels in height and 28 pixels in width, for a total of 784 pixels. Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker. This pixel-value is an integer between 0 and 255, inclusive. Step-1 Preparing Environment
###Code
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
###Output
_____no_output_____
###Markdown
Importing data
###Code
(x_train,y_train),(x_test,y_test) = mnist.load_data()
print(x_train.shape)
print(x_test.shape)
###Output
(60000, 28, 28)
(10000, 28, 28)
###Markdown
Normalizing data
###Code
x_train = x_train.reshape(60000,784)/255
x_test = x_test.reshape(10000,784)/255
###Output
_____no_output_____
###Markdown
Step 2: Initializing Parameters Weights and Bias Structure of Neural Network- Input Layer has 784 neurons (28 x 28)- Hidden Layer has 15 neurons- Output Layer has 10 neurons (10 classes)- `bias0`, `bias1`, `weight0` and `weight1` are the parameters used in forward propagation- `re_bias0`, `re_bias1`, `re_weight0` and `re_weight1` hold the gradients computed during backward propagation
###Code
bias0 = [0]*15
bias1 = [0]*10
re_bias0 = [0]*15
re_bias1 = [0]*10
weight0 = [[0 for i in range(784)]for i in range(15)]
weight1 = [[0 for i in range(15)]for i in range(10)]
re_weight0 = [[0 for i in range(784)]for i in range(15)]
re_weight1 = [[0 for i in range(15)]for i in range(10)]
for i in range(15):
bias0[i] = np.random.rand()*0.1
for i in range(10):
bias1[i] = np.random.rand()*0.1
for i in range(15):
for j in range(784):
weight0[i][j] = np.random.randn()*0.1
for i in range(10):
for j in range(15):
weight1[i][j] = np.random.randn()*0.1
###Output
_____no_output_____
###Markdown
Input and Output layers- `Input0` holds the weighted inputs to the hidden layer (data times weights plus bias)- `Output0` is the hidden-layer output after the sigmoid activation- `Input1` holds the weighted inputs to the output layer- `Output1` is the prediction produced by the softmax function
###Code
Input0 = [0]*15
Input1 = [0]*10
Output0 = [0]*15
Output1 = [0]*10
Input0_test = [0]*15
Input1_test = [0]*10
Output0_test = [0]*15
Output1_test = [0]*10
###Output
_____no_output_____
###Markdown
Step 3: Defining all Methods Sigmoid function
###Code
def sigmoid(x):
return 1/(1+np.exp(-x))
###Output
_____no_output_____
###Markdown
Derivative of Sigmoid function- The derivative of the sigmoid function sigm at any x ∈ R is implemented as dsigm(x)/dx := sigm(x)(1 − sigm(x))
###Code
def dsigm(x):
return sigmoid(x)*(1-sigmoid(x))
###Output
_____no_output_____
###Markdown
Softmax function
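The implementation below subtracts $a=\max_j x_j$ before exponentiating for numerical stability; since the shift cancels, the result is the usual softmax (worked out here for reference):
$$\mathrm{softmax}(x)_i=\frac{e^{x_i-a}}{\sum_j e^{x_j-a}}=\frac{e^{x_i}}{\sum_j e^{x_j}},\qquad a=\max_j x_j$$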
###Code
def softmax(x_array):
a = np.max(x_array)
exp_x = np.exp(x_array-a)
sum_exp_x = np.sum(exp_x)
y_array = exp_x/sum_exp_x
return y_array
###Output
_____no_output_____
###Markdown
Delta Function, Sum of Squares Error and Back Propagation Function
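Written out for reference (this summary is read directly off the code below, with $\eta$ the learning rate, $\sigma'$ the sigmoid derivative, $t$ the one-hot target and $x$ the input image):
$$\delta^{(1)}_i=(O^{(1)}_i-t_i)\,\sigma'(I^{(1)}_i),\qquad w^{(1)}_{ij}\leftarrow w^{(1)}_{ij}-\eta\,\delta^{(1)}_i\,O^{(0)}_j,\qquad b^{(1)}_i\leftarrow b^{(1)}_i-\eta\,\delta^{(1)}_i$$
$$\delta^{(0)}_j=\Big(\sum_i \delta^{(1)}_i\,w^{(1)}_{ij}\Big)\,\sigma'(I^{(0)}_j),\qquad w^{(0)}_{jk}\leftarrow w^{(0)}_{jk}-\eta\,\delta^{(0)}_j\,x_k,\qquad b^{(0)}_j\leftarrow b^{(0)}_j-\eta\,\delta^{(0)}_j$$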
###Code
def delta(num,t_n,Op1,Ip1,we1):
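    # Error propagated back to hidden unit `num`: sum over the 10 output units of (output - target) * weight * dsigm(pre-activation)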
sum_1 = 0
for i in range(10):
sum_1 += (Op1[i]-t_n[i])*we1[i][num]*dsigm(Ip1[i])
return sum_1
def sum_of_squares_error(y,t):
return 0.5*np.sum((y-t)**2)
def back_propagation(Out0,Out1,In0,In1,t_num,x_t,l_rate):
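    # Gradient-descent updates for both layers; l_rate is the learning rate, t_num the one-hot target, x_t the input image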
global weight0
global weight1
global bias0
global bias1
for i in range(10):
for j in range(15):
re_weight1[i][j] = (Out1[i]-t_num[i])*dsigm(In1[i])
weight1[i][j] -= l_rate*re_weight1[i][j]*Out0[j]
for i in range(15):
for j in range(784):
re_weight0[i][j] = delta(i,t_num,Out1,In1,weight1)*dsigm(In0[i])
weight0[i][j] -= l_rate*re_weight0[i][j]*x_t[j]
for i in range(10):
re_bias1[i] = (Out1[i]-t_num[i])*dsigm(In1[i])
bias1[i] -= l_rate*re_bias1[i]
for i in range(15):
re_bias0[i] = delta(i,t_num,Out1,In1,weight1)*dsigm(In0[i])
bias0[i] -= l_rate*re_bias0[i]
###Output
_____no_output_____
###Markdown
Accuracy Function
###Code
def accuracy(y_list,t_list,switch):
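    # 'train' accuracy is computed over mini-batches of 100 samples, 'test' over the 10,000-sample test set (hence the divisors)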
max_y = np.argmax(y_list,axis=1)
max_t = np.argmax(t_list,axis=1)
if switch == "train":
return np.sum(max_y == max_t)/100
elif switch == "test":
return np.sum(max_y == max_t)/ 10000
###Output
_____no_output_____
###Markdown
Function to visualize
###Code
def plot_figure(acc, loss, num, name):
x = list(range(num))
y = acc
z = loss
plt.plot(x, y, label = "accuracy")
plt.plot(x, z, label = "loss")
plt.legend(loc = "lower right")
plt.savefig("../reports/"+name+"_acc_loss.jpg")
###Output
_____no_output_____
###Markdown
Step 4: Hyperparameters- After experimenting with the values of these hyperparameters, I found that the settings below gave decent performance.
###Code
learning_rate = 0.1
epochs = 12
input_words = 3
###Output
_____no_output_____
###Markdown
Step 5: Training the model
###Code
all_train_accuracy = []
all_train_loss = []
for l in range(epochs):
print("Epoch :"+str(l))
for k in range(input_words):
train_prediction = []
train_answer = []
print("Iteration "+str(l*input_words+k)+": ", end="")
for j in range(100):
for i in range(15):
Input0[i] = np.dot(x_train[k*100+j],weight0[i])+bias0[i]
Output0[i] = sigmoid(Input0[i])
for i in range(10):
Input1[i] = np.dot(Output0,weight1[i])+bias1[i]
Output1 = softmax(Input1)
train_num = [0]*10
train_num[y_train[k*100+j]] = train_num[y_train[k*100+j]]+1
train_prediction.append(Output1)
train_answer.append(train_num)
back_propagation(Output0,Output1,Input0,Input1,train_num,x_train[k*100+j],learning_rate)
train_acc = accuracy(train_prediction,train_answer,"train")
train_loss = sum_of_squares_error(Output1,train_num)
print(" train_accuracy = "+str(train_acc), end="\t")
print(" train_loss = "+str(train_loss))
all_train_accuracy.append(train_acc)
all_train_loss.append(train_loss)
number = epochs*input_words
plot_figure(all_train_accuracy, all_train_loss,number,"train")
###Output
Epoch :0
Iteration 0: train_accuracy = 0.08 train_loss = 0.44800527049156846
Iteration 1: train_accuracy = 0.17 train_loss = 0.41277282329473025
Iteration 2: train_accuracy = 0.19 train_loss = 0.4398694363653151
Epoch :1
Iteration 3: train_accuracy = 0.31 train_loss = 0.42196999227515947
Iteration 4: train_accuracy = 0.31 train_loss = 0.3939824183787834
Iteration 5: train_accuracy = 0.44 train_loss = 0.38579241278237525
Epoch :2
Iteration 6: train_accuracy = 0.55 train_loss = 0.3626184793003036
Iteration 7: train_accuracy = 0.56 train_loss = 0.3679904066667354
Iteration 8: train_accuracy = 0.65 train_loss = 0.3126803426901914
Epoch :3
Iteration 9: train_accuracy = 0.65 train_loss = 0.273897975609538
Iteration 10: train_accuracy = 0.67 train_loss = 0.324317224992224
Iteration 11: train_accuracy = 0.71 train_loss = 0.24326008880484337
Epoch :4
Iteration 12: train_accuracy = 0.73 train_loss = 0.20107053889390802
Iteration 13: train_accuracy = 0.74 train_loss = 0.2668678612495965
Iteration 14: train_accuracy = 0.77 train_loss = 0.18940932500858398
Epoch :5
Iteration 15: train_accuracy = 0.79 train_loss = 0.1504242369101271
Iteration 16: train_accuracy = 0.78 train_loss = 0.20602147570872104
Iteration 17: train_accuracy = 0.81 train_loss = 0.14843439529947305
Epoch :6
Iteration 18: train_accuracy = 0.83 train_loss = 0.11515809826063737
Iteration 19: train_accuracy = 0.85 train_loss = 0.152028481715012
Iteration 20: train_accuracy = 0.83 train_loss = 0.1174756246328594
Epoch :7
Iteration 21: train_accuracy = 0.87 train_loss = 0.08992709051429476
Iteration 22: train_accuracy = 0.87 train_loss = 0.1096669962000192
Iteration 23: train_accuracy = 0.84 train_loss = 0.09476447068570312
Epoch :8
Iteration 24: train_accuracy = 0.9 train_loss = 0.07148277106923524
Iteration 25: train_accuracy = 0.9 train_loss = 0.07882519171050721
Iteration 26: train_accuracy = 0.84 train_loss = 0.07820054773185757
Epoch :9
Iteration 27: train_accuracy = 0.92 train_loss = 0.05781075582816641
Iteration 28: train_accuracy = 0.94 train_loss = 0.05737041493036575
Iteration 29: train_accuracy = 0.85 train_loss = 0.06580322659625584
Epoch :10
Iteration 30: train_accuracy = 0.92 train_loss = 0.047511733441805815
Iteration 31: train_accuracy = 0.95 train_loss = 0.042773754260789415
Iteration 32: train_accuracy = 0.9 train_loss = 0.056185637752054665
Epoch :11
Iteration 33: train_accuracy = 0.93 train_loss = 0.03957564989504498
Iteration 34: train_accuracy = 0.97 train_loss = 0.032831649501395055
Iteration 35: train_accuracy = 0.93 train_loss = 0.04854949994935693
###Markdown
Step 6: Testing the model
###Code
test_prediction = []
test_answer = []
for j in range(10000):
for i in range(15):
Input0_test[i] = np.dot(x_test[j],weight0[i])+bias0[i]
Output0_test[i] = sigmoid(Input0_test[i])
for i in range(10):
Input1_test[i] = np.dot(Output0_test,weight1[i])+bias1[i]
Output1_test = softmax(Input1_test)
test_num = [0]*10
test_num[y_test[j]] = test_num[y_test[j]]+1
test_prediction.append(Output1_test)
test_answer.append(test_num)
test_acc = accuracy(test_prediction,test_answer,"test")
test_loss = sum_of_squares_error(Output1_test,test_num)
print("test_accuracy = "+str(test_acc), end="\t")
print("test_loss = "+str(test_loss))
###Output
test_accuracy = 0.7779 test_loss = 0.03969880784811204
###Markdown
Step 7: Visualizing the performance of our model
###Code
X_train__ = x_test.reshape(x_test.shape[0], 28, 28)
fig, axis = plt.subplots(4, 3, figsize=(15, 5))
for i, ax in enumerate(axis.flat):
randomindex=int(np.random.rand()*1000)
ax.imshow(X_train__[randomindex], cmap='binary')
digit = y_test[randomindex]
prediction=test_prediction[randomindex].argmax()
ax.axis(False)
ax.set(title = f"[Label: {digit}| Prediction: {prediction}]");
###Output
_____no_output_____ |
Jupyter/.ipynb_checkpoints/Python Data Structures - Lists-checkpoint.ipynb | ###Markdown
Lists
###Code
example1 = [1,2,3,4,]
example2 = ['a','b','c']
example3 = [1 , 'a', True]
x = ['M', 'O','N','T','Y',' ','P','Y','T','H','O','N']
print(x)
type(x)
x[0]
print(x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7], x[8], x[9], x[10], x[11])
x = [12, 43, 4, 1, 6, 343, 10]
x[0]
x[1]
x = [1.1, 3.5, 4.2, 9.4]
x[0]
x = ['himanshu', 'aggarwal', 'ironhack', 'data analysis']
x[0]
x = [1 , 'himanshu', 2.0, True]
x[0]
x[1]
x[3]
x = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
x[:]
x[:3]
x[3:]
x[3:5]
x = [12, 43, 4, 1, 6, 343, 10]
len(x)
x[6]
x[len(x)-1]
x = [1, 1.1, 23, 5.3, 5, 8.3, 'hello', True]
len(x)
x = [1, 1.1, 23, 5.3, 5, 8.3, 'hello', True]
x.index('hello')
x.index(8.3)
print(x)
x.append('hello')
print(x)
x.append('there')
print(x)
print(x)
x.pop()
print(x)
x.pop()
print(x)
###Output
[1, 1.1, 23, 5.3, 5, 8.3, 'hello', True]
###Markdown
Exercises 1.1
###Code
lst = [1,2,34,5,3,12,9, 8, 67, 89, 98, 90, 39, 21, 45, 46, 23, 13]
len(lst)
lst[0]
lst[17]
lst.index(90)
lst[0:8]
###Output
_____no_output_____ |
examples/Non RGB Example.ipynb | ###Markdown
Example of DenseCRF with non-RGB data This notebook goes through an example of how to use DenseCRFs on non-RGB data.At the same time, it will explain basic concepts and walk through an example, so it could be useful even if you're dealing with RGB data, though do have a look at [PyDenseCRF's README](https://github.com/lucasb-eyer/pydensecrfpydensecrf) too! Basic setup It is highly recommended you install PyDenseCRF through pip, for example `pip install git+https://github.com/lucasb-eyer/pydensecrf.git`, but if for some reason you couldn't, you can always use it like so after compiling it:
###Code
#import sys
#sys.path.insert(0,'/path/to/pydensecrf/')
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax, create_pairwise_bilateral
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
###Output
_____no_output_____
###Markdown
Unary Potential The unary potential consists of per-pixel class-probabilities. This could come from any kind of model such as a random-forest or the softmax of a deep neural network. Create unary potential
###Code
from scipy.stats import multivariate_normal
H, W, NLABELS = 400, 512, 2
# This creates a gaussian blob...
pos = np.stack(np.mgrid[0:H, 0:W], axis=2)
rv = multivariate_normal([H//2, W//2], (H//4)*(W//4))
probs = rv.pdf(pos)
# ...which we project into the range [0.4, 0.6]
probs = (probs-probs.min()) / (probs.max()-probs.min())
probs = 0.5 + 0.2 * (probs-0.5)
# The first dimension needs to be equal to the number of classes.
# Let's have one "foreground" and one "background" class.
# So replicate the gaussian blob but invert it to create the probability
# of the "background" class to be the opposite of "foreground".
probs = np.tile(probs[np.newaxis,:,:],(2,1,1))
probs[1,:,:] = 1 - probs[0,:,:]
# Let's have a look:
plt.figure(figsize=(15,5))
plt.subplot(1,2,1); plt.imshow(probs[0,:,:]); plt.title('Foreground probability'); plt.axis('off'); plt.colorbar();
plt.subplot(1,2,2); plt.imshow(probs[1,:,:]); plt.title('Background probability'); plt.axis('off'); plt.colorbar();
###Output
_____no_output_____
###Markdown
Run inference with unary potential We can already run a DenseCRF with only a unary potential.This doesn't account for neighborhoods at all, so it's not the greatest idea, but we can do it:
###Code
# Inference without pair-wise terms
U = unary_from_softmax(probs) # note: num classes is first dim
d = dcrf.DenseCRF2D(W, H, NLABELS)
d.setUnaryEnergy(U)
# Run inference for 10 iterations
Q_unary = d.inference(10)
# The Q is now the approximate posterior, we can get a MAP estimate using argmax.
map_soln_unary = np.argmax(Q_unary, axis=0)
# Unfortunately, the DenseCRF flattens everything, so get it back into picture form.
map_soln_unary = map_soln_unary.reshape((H,W))
# And let's have a look.
plt.imshow(map_soln_unary); plt.axis('off'); plt.title('MAP Solution without pairwise terms');
###Output
_____no_output_____
###Markdown
Pairwise terms The whole point of DenseCRFs is to use some form of content to smooth out predictions. This is done via "pairwise" terms, which encode relationships between elements. Add (non-RGB) pairwise term For example, in image processing, a popular pairwise relationship is the "bilateral" one, which roughly says that pixels with either a similar color or a similar location are likely to belong to the same class.
###Code
NCHAN=1
# Create simple image which will serve as bilateral.
# Note that we put the channel dimension last here,
# but we could also have it be the first dimension and
# just change the `chdim` parameter to `0` further down.
img = np.zeros((H,W,NCHAN), np.uint8)
img[H//3:2*H//3,W//4:3*W//4,:] = 1
plt.imshow(img[:,:,0]); plt.title('Bilateral image'); plt.axis('off'); plt.colorbar();
# Create the pairwise bilateral term from the above image.
# The two `s{dims,chan}` parameters are model hyper-parameters defining
# the strength of the location and image content bilaterals, respectively.
pairwise_energy = create_pairwise_bilateral(sdims=(10,10), schan=(0.01,), img=img, chdim=2)
# pairwise_energy now contains as many dimensions as the DenseCRF has features,
# which in this case is 3: (x,y,channel1)
img_en = pairwise_energy.reshape((-1, H, W)) # Reshape just for plotting
plt.figure(figsize=(15,5))
plt.subplot(1,3,1); plt.imshow(img_en[0]); plt.title('Pairwise bilateral [x]'); plt.axis('off'); plt.colorbar();
plt.subplot(1,3,2); plt.imshow(img_en[1]); plt.title('Pairwise bilateral [y]'); plt.axis('off'); plt.colorbar();
plt.subplot(1,3,3); plt.imshow(img_en[2]); plt.title('Pairwise bilateral [c]'); plt.axis('off'); plt.colorbar();
###Output
_____no_output_____
###Markdown
Run inference of complete DenseCRF Now we can create a dense CRF with both unary and pairwise potentials and run inference on it to get our final result.
###Code
d = dcrf.DenseCRF2D(W, H, NLABELS)
d.setUnaryEnergy(U)
d.addPairwiseEnergy(pairwise_energy, compat=10) # `compat` is the "strength" of this potential.
# This time, let's do inference in steps ourselves
# so that we can look at intermediate solutions
# as well as monitor KL-divergence, which indicates
# how well we have converged.
# PyDenseCRF also requires us to keep track of two
# temporary buffers it needs for computations.
Q, tmp1, tmp2 = d.startInference()
for _ in range(5):
d.stepInference(Q, tmp1, tmp2)
kl1 = d.klDivergence(Q) / (H*W)
map_soln1 = np.argmax(Q, axis=0).reshape((H,W))
for _ in range(20):
d.stepInference(Q, tmp1, tmp2)
kl2 = d.klDivergence(Q) / (H*W)
map_soln2 = np.argmax(Q, axis=0).reshape((H,W))
for _ in range(50):
d.stepInference(Q, tmp1, tmp2)
kl3 = d.klDivergence(Q) / (H*W)
map_soln3 = np.argmax(Q, axis=0).reshape((H,W))
img_en = pairwise_energy.reshape((-1, H, W)) # Reshape just for plotting
plt.figure(figsize=(15,5))
plt.subplot(1,3,1); plt.imshow(map_soln1);
plt.title('MAP Solution with DenseCRF\n(5 steps, KL={:.2f})'.format(kl1)); plt.axis('off');
plt.subplot(1,3,2); plt.imshow(map_soln2);
plt.title('MAP Solution with DenseCRF\n(20 steps, KL={:.2f})'.format(kl2)); plt.axis('off');
plt.subplot(1,3,3); plt.imshow(map_soln3);
plt.title('MAP Solution with DenseCRF\n(75 steps, KL={:.2f})'.format(kl3)); plt.axis('off');
###Output
_____no_output_____ |
module1-join-and-reshape-data/LS_DSPT3_121_Join_and_Reshape_Data.ipynb | ###Markdown
_Lambda School Data Science_ Join and Reshape datasetsObjectives- concatenate data with pandas- merge data with pandas- understand tidy data formatting- melt and pivot data with pandasLinks- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)- [Tidy Data](https://en.wikipedia.org/wiki/Tidy_data) - Combine Data Sets: Standard Joins - Tidy Data - Reshaping Data- Python Data Science Handbook - [Chapter 3.6](https://jakevdp.github.io/PythonDataScienceHandbook/03.06-concat-and-append.html), Combining Datasets: Concat and Append - [Chapter 3.7](https://jakevdp.github.io/PythonDataScienceHandbook/03.07-merge-and-join.html), Combining Datasets: Merge and Join - [Chapter 3.8](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html), Aggregation and Grouping - [Chapter 3.9](https://jakevdp.github.io/PythonDataScienceHandbook/03.09-pivot-tables.html), Pivot Tables Reference- Pandas Documentation: [Reshaping and Pivot Tables](https://pandas.pydata.org/pandas-docs/stable/reshaping.html)- Modern Pandas, Part 5: [Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)- [Hadley Wickham's famous paper](http://vita.had.co.nz/papers/tidy-data.html) on Tidy Data Download dataWe’ll work with a dataset of [3 Million Instacart Orders, Open Sourced](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2)!
###Code
!wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
!tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
%cd /content/
!ls -lh *.csv
%cd instacart_2017_05_01
!ls -lh *.csv
%cd /content/
!rm -rf instacart_2017_05_01/
!rm instacart_online_grocery_shopping_2017_05_01.tar.gz
###Output
_____no_output_____
###Markdown
Download with Python
###Code
%cd /content/
import urllib.request
url = 'https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz'
file_name = 'instacart_online_grocery_shopping_2017_05_01.tar.gz'
urllib.request.urlretrieve(url, file_name)
import tarfile
tar = tarfile.open(file_name, "r:gz")
tar.extractall()
tar.close()
import os
print(os.getcwd())
os.chdir('/content/instacart_2017_05_01/')
print(os.getcwd())
import glob
glob.glob("/content/instacart_2017_05_01/*.csv")
###Output
_____no_output_____
###Markdown
Join Datasets Goal: Reproduce this exampleThe first two orders for user id 1:
###Code
from IPython.display import display, Image
url = 'https://cdn-images-1.medium.com/max/1600/1*vYGFQCafJtGBBX5mbl0xyw.png'
example = Image(url=url, width=600)
display(example)
###Output
_____no_output_____
###Markdown
Load dataHere's a list of all six CSV filenames
###Code
!ls -lh *.csv
###Output
-rw-r--r-- 1 502 staff 2.6K May 2 2017 aisles.csv
-rw-r--r-- 1 502 staff 270 May 2 2017 departments.csv
-rw-r--r-- 1 502 staff 551M May 2 2017 order_products__prior.csv
-rw-r--r-- 1 502 staff 24M May 2 2017 order_products__train.csv
-rw-r--r-- 1 502 staff 104M May 2 2017 orders.csv
-rw-r--r-- 1 502 staff 2.1M May 2 2017 products.csv
###Markdown
For each CSV- Load it with pandas- Look at the dataframe's shape- Look at its head (first rows)- `display(example)`- Which columns does it have in common with the example we want to reproduce?
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
aisles
###Code
aisles = pd.read_csv("aisles.csv")
aisles.head()
aisles.shape
display(example)
aisles.describe()
aisles.describe(exclude='number')
###Output
_____no_output_____
###Markdown
departments
###Code
departments = pd.read_csv('departments.csv')
departments.head()
departments.shape
display(example)
###Output
_____no_output_____
###Markdown
order_products__prior
###Code
order_products__prior = pd.read_csv('order_products__prior.csv')
order_products__prior.head()
order_products__prior.shape
###Output
_____no_output_____
###Markdown
We need:- order_id- product_id- add_to_cart_order order_products__train
###Code
order_products__train = pd.read_csv('order_products__train.csv')
order_products__train.head()
order_products__train.shape
###Output
_____no_output_____
###Markdown
orders
###Code
orders = pd.read_csv('orders.csv')
orders.head()
display(example)
###Output
_____no_output_____
###Markdown
We need:- order_id- user_id- order_number- order_dow- order_hour_of_day products
###Code
products = pd.read_csv('products.csv')
products.head()
products.shape
###Output
_____no_output_____
###Markdown
Concatenate order_products__prior and order_products__train
###Code
order_products = pd.concat([order_products__prior, order_products__train])
order_products.shape
print(order_products__prior.shape, order_products__train.shape, order_products.shape)
assert len(order_products__prior) + len(order_products__train) == len(order_products)
display(example)
###Output
_____no_output_____
###Markdown
Short `groupby` example
###Code
order_products.groupby('order_id')['product_id'].count().mean()
grouped_orders = order_products.groupby('order_id')
grouped_orders.get_group(2539329)
order_products[order_products['order_id'] == 2539329]
grouped_orders['product_id'].count()
grouped_orders['product_id'].count().hist()
grouped_orders['product_id'].count().hist(bins=50)
###Output
_____no_output_____
###Markdown
Get a subset of orders — the first two orders for user id 1 From `orders` dataframe:- user_id- order_id- order_number- order_dow- order_hour_of_day
###Code
orders.head()
orders.shape
condition = (orders['user_id'] == 1) & (orders['order_number'] <= 2)
columns = ['order_id','user_id', 'order_number', 'order_dow', 'order_hour_of_day']
subset = orders[condition][columns]
subset.head()
###Output
_____no_output_____
###Markdown
Merge dataframes Merge the subset from `orders` with columns from `order_products`
###Code
columns = ['order_id','product_id','add_to_cart_order']
merged = pd.merge(subset, order_products[columns])
merged.head()
display(example)
###Output
_____no_output_____
###Markdown
Merge with columns from `products`
###Code
final = pd.merge(merged, products[['product_id', 'product_name']])
final.head()
columns = ['user_id', 'order_id', 'order_number','order_dow','order_hour_of_day','add_to_cart_order', 'product_id','product_name']
final = final[columns]
final
final = final.sort_values(by=['order_number', 'add_to_cart_order'])
final
columns = [col.replace('_', ' ') for col in final.columns]
columns
final.columns = columns
final
display(example)
###Output
_____no_output_____
###Markdown
Reshape Datasets Why reshape data? Some libraries prefer data in different formats. For example, the Seaborn data visualization library prefers data in "Tidy" format often (but not always).> "[Seaborn will be most powerful when your datasets have a particular organization.](https://seaborn.pydata.org/introduction.html#organizing-datasets) This format is alternately called “long-form” or “tidy” data and is described in detail by Hadley Wickham. The rules can be simply stated:> - Each variable is a column- Each observation is a row> A helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot." Data science is often about putting square pegs in round holes. Here's an inspiring [video clip from _Apollo 13_](https://www.youtube.com/watch?v=ry55--J4_VQ): “Invent a way to put a square peg in a round hole.” It's a good metaphor for data wrangling! Hadley Wickham's Examples From his paper, [Tidy Data](http://vita.had.co.nz/papers/tidy-data.html)
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
table1 = pd.DataFrame(
[[np.nan, 2],
[16, 11],
[3, 1]],
index=['John Smith', 'Jane Doe', 'Mary Johnson'],
columns=['treatmenta', 'treatmentb'])
table2 = table1.T
###Output
_____no_output_____
###Markdown
"Table 1 provides some data about an imaginary experiment in a format commonly seen in the wild. The table has two columns and three rows, and both rows and columns are labelled."
###Code
table1
###Output
_____no_output_____
###Markdown
"There are many ways to structure the same underlying data. Table 2 shows the same data as Table 1, but the rows and columns have been transposed. The data is the same, but the layout is different."
###Code
table2
###Output
_____no_output_____
###Markdown
"Table 3 reorganises Table 1 to make the values, variables and obserations more clear.Table 3 is the tidy version of Table 1. Each row represents an observation, the result of one treatment on one person, and each column is a variable."| name | trt | result ||--------------|-----|--------|| John Smith | a | - || Jane Doe | a | 16 || Mary Johnson | a | 3 || John Smith | b | 2 || Jane Doe | b | 11 || Mary Johnson | b | 1 | Table 1 --> TidyWe can use the pandas `melt` function to reshape Table 1 into Tidy format.
###Code
table1
table1.index
table1 = table1.reset_index()
table1
tidy = table1.melt(id_vars='index')
tidy
tidy.columns = ['name', 'trt', 'result']
tidy
###Output
_____no_output_____
###Markdown
Table 2 --> Tidy
###Code
##### LEAVE BLANK --an assignment exercise #####
###Output
_____no_output_____
###Markdown
Tidy --> Table 1The `pivot_table` function is the inverse of `melt`.
###Code
table1
tidy.pivot_table(index='name', columns='trt', values='result')
###Output
_____no_output_____
###Markdown
Tidy --> Table 2
###Code
##### LEAVE BLANK --an assignment exercise #####
###Output
_____no_output_____
###Markdown
Seaborn exampleThe rules can be simply stated:- Each variable is a column- Each observation is a rowA helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot."
###Code
import seaborn as sns
sns.catplot(x='trt', y='result', col='name',
kind='bar', data=tidy, height=3);
###Output
_____no_output_____
###Markdown
Now with Instacart data
###Code
products = pd.read_csv('products.csv')
order_products = pd.concat([pd.read_csv('order_products__prior.csv'),
pd.read_csv('order_products__train.csv')])
orders = pd.read_csv('orders.csv')
###Output
_____no_output_____
###Markdown
Goal: Reproduce part of this exampleInstead of a plot with 50 products, we'll just do two — the first products from each list- Half And Half Ultra Pasteurized- Half Baked Frozen Yogurt
###Code
from IPython.display import display, Image
url = 'https://cdn-images-1.medium.com/max/1600/1*wKfV6OV-_1Ipwrl7AjjSuw.png'
example = Image(url=url, width=600)
display(example)
###Output
_____no_output_____
###Markdown
So, given a `product_name` we need to calculate its `order_hour_of_day` pattern. Subset and MergeOne challenge of performing a merge on this data is that the `products` and `orders` datasets do not have any common columns that we can merge on. Due to this we will have to use the `order_products` dataset to provide the columns that we will use to perform the merge.
###Code
product_names = ['Half And Half Ultra Pasteurized', 'Half Baked Frozen Yogurt']
products.columns
orders.columns
order_products.columns
merged = (products[['product_id', 'product_name']]
.merge(order_products[['order_id', 'product_id']])
.merge(orders[['order_id', 'order_hour_of_day']]))
merged.head()
condition = merged['product_name'].isin(product_names)
subset = merged[condition]
subset.head()
assert sorted(list(subset['product_name'].unique())) == sorted(product_names)
###Output
_____no_output_____
###Markdown
4 ways to reshape and plot 1. value_counts
###Code
froyo = subset[subset['product_name'] == 'Half Baked Frozen Yogurt']
cream = subset[subset['product_name'] == 'Half And Half Ultra Pasteurized']
cream.head()
cream['order_hour_of_day'].value_counts(normalize=True).sort_index().plot()
froyo['order_hour_of_day'].value_counts(normalize=True).sort_index().plot();
###Output
_____no_output_____
###Markdown
2. crosstab
###Code
pd.crosstab(subset['order_hour_of_day'], subset['product_name'], normalize='columns').plot()
###Output
_____no_output_____
###Markdown
3. Pivot Table
###Code
subset.pivot_table(index='order_hour_of_day', columns='product_name', values='order_id', aggfunc=len).plot()
###Output
_____no_output_____
###Markdown
4. melt
###Code
table = pd.crosstab(subset['order_hour_of_day'],
subset['product_name'],
normalize=True)
table.head()
melted = (table
.reset_index()
.melt(id_vars='order_hour_of_day')
.rename(columns={
'order_hour_of_day': 'Hour of Day Ordered',
'product_name': 'Product',
'value': 'Percent of Orders by Product'
}))
melted
import seaborn as sns
sns.relplot(x='Hour of Day Ordered',
y='Percent of Orders by Product',
hue='Product',
data=melted,
kind='line');
###Output
_____no_output_____ |
notebooks/3_fe_on_large_data_dask.ipynb | ###Markdown
In this notebook, I will use the best of both worlds:- Use `tsfresh` to extract features- Use `Dask` for parallelization and handling larger-than-memory datasets - Dask will distribute the jobs across multiple cores (single machine or distributed cluster) - Dask DataFrame utilizes out-of-core computing This notebook is divided into two sections:- Dask Basics- Automated FE using `tsfresh` & `Dask`
###Code
import glob
import os
import sys
import pandas as pd
import numpy as np
import dask
from dask.distributed import Client, LocalCluster
import dask.dataframe as dd
def get_segment_id_from_path(df, path):
"""
Returns the segment_id from the path of the file
"""
df.segment_id = df.segment_id.str.replace(path, "", regex=False)
df.segment_id = df.segment_id.str.replace(".csv", "", regex=False)
df.segment_id = df.segment_id.astype(np.int64)
return df
def append_time_column(df):
df["time"] = range(0, len(df))
return df
# Path for raw data
DATA_DIR = "/datadrive/arnab/vssexclude/kaggle/volcano/data/train"
# Path to save generated features
FEATURE_PATH = "/datadrive/arnab/vssexclude/kaggle/volcano/data/features"
# Define the datatypes for different sensor data
data_types = {"sensor_1" : np.float32,
"sensor_2" : np.float32,
"sensor_3" : np.float32,
"sensor_4" : np.float32,
"sensor_5" : np.float32,
"sensor_6" : np.float32,
"sensor_7" : np.float32,
"sensor_8" : np.float32,
"sensor_9" : np.float32,
"sensor_10" : np.float32}
###Output
_____no_output_____
###Markdown
Dask Basics Dask ArchitectureTechnically, Dask is a centrally managed distributed service, with distributed storage and execution on the workers and peer-to-peer communication between them. What is a Client?The Client connects users to a Dask cluster. After a Dask cluster is set up, we initialize a Client by pointing it to the address of a Scheduler:
```python
from distributed import Client
client = Client("1.2.3.4:8786")
```
Here we are creating a Local Cluster and then connecting the Dask Client to the Local Cluster. By specifying `n_workers=8`, we have asked Dask to start `8` independent Python processes. Based on the nature of the cluster, they may run on the same machine or on different machines.
###Code
cluster = LocalCluster(n_workers=8,
threads_per_worker=1,
scheduler_port=8786,
memory_limit='2GB')
client = Client(cluster)
client
###Output
_____no_output_____
###Markdown
Read Data
###Code
!ls -lrt {DATA_DIR}/1408*.csv | wc -l
%%time
ddf = dd.read_csv(
urlpath=f"{DATA_DIR}/1408*.csv",
blocksize=None,
dtype=data_types,
include_path_column='segment_id')
###Output
CPU times: user 95.7 ms, sys: 23.2 ms, total: 119 ms
Wall time: 135 ms
###Markdown
What just happened:- Dask just checked the input path and found that there are multiple CSV files matching the path description- It has not really loaded the content of the individual CSV files yet. - Nothing happens in the Dask UI, because these operations are just setting up a task graph which will be executed later- Dask is lazy by default. It will load all the CSV files into memory **in parallel** only when we ask for a result- We can ask for a result by invoking the `compute()` methodNote:- A `None` value for `blocksize` creates a single partition for each CSV file
###Code
ddf
###Output
_____no_output_____
###Markdown
What is a Dask DataFrame?- The Dask DataFrame API extends Pandas to work on **larger than memory** datasets on laptops, or on distributed datasets across a cluster- It reuses a lot of Pandas' code and extends it to scale. How is a Dask DataFrame constructed? Observations- This Dask DataFrame is composed of 4 Pandas DataFrames- It has the column names and data types- It has 4 tasks, i.e. 4 small Python functions which must be run to execute this entire Dask DataFrame.
###Code
ddf.visualize()
###Output
_____no_output_____
###Markdown
Let's compute the maximum value of the `sensor_1` feature
###Code
ddf.sensor_1.max()
ddf.sensor_1.max().visualize()
ddf.sensor_1.max().compute()
type(ddf.sensor_1.max().compute())
###Output
_____no_output_____
###Markdown
What just happened?- Dask checked the input path. Identified the matching files- A bunch of jobs were created. Here, one job per chunk/partition. - Each CSV file is read from disk and loaded into a Pandas DataFrame- For each Pandas DataFrame, the maximum value of the `sensor_1` feature is computed- Results from the multiple Pandas DataFrames are combined to get the final result, i.e., the maximum value of `sensor_1` across all the CSVs- Look at the Dask Dashboard before and after the compute()- Note: **The result of `compute()` must fit in-memory.** How to parallelize a custom function working on individual partitions? Problem Statement- I have a function which works well on one Pandas DataFrame. How can I parallelize it over multiple Pandas DataFrames?`map_partitions()` is the answer. It applies the function in an **embarrassingly parallel** way to multiple Pandas DataFrames. Calculate the percentage of missing values across sensors for all the segments
###Code
def get_missing_sensors(df):
"""
Returns a DataFrame consisting percentage of missing data per sensor
"""
df_missing_percentage = df.isna().mean().to_frame().transpose()
df_missing_percentage = df_missing_percentage.astype(np.float16)
return df_missing_percentage
df_train_seg_missing = ddf.map_partitions(get_missing_sensors).compute()
ddf.map_partitions(get_missing_sensors).visualize()
client.close()
cluster.close()
###Output
_____no_output_____
###Markdown
Automated FE using `tsfresh` & `Dask` Here, input data starts from the hard drive & output (extracted features) will end on the hard drive. In between, Dask will read input data chunk by chunk, extract features and write to hard drive. Steps- Create a Dask Cluster and connect a Client to it.- Read data using Dask DataFrame from hard drive.- Extract features using `tsfresh.feature_extraction.extract_features`. Dask parallelizes execution of this function using `map_partitions`.- Write the extracted features to hard drive segment by segment. 1. Create a Dask Cluster and connect a Client to it
###Code
cluster = LocalCluster(n_workers=8,
threads_per_worker=1,
scheduler_port=8786,
memory_limit='3GB')
client = Client(cluster)
client
###Output
_____no_output_____
###Markdown
2. Read Data using Dask DataFrame
###Code
ddf = dd.read_csv(
urlpath=f"{DATA_DIR}/1*.csv",
blocksize=None,
usecols=["sensor_1", "sensor_4"],
dtype=data_types,
include_path_column='segment_id')
# Use the first 1000 observations
ddf = ddf.loc[0:999, :]
# Insert a new column with segment_id along with the values from 10 sensors
ddf = ddf.map_partitions(get_segment_id_from_path, f"{DATA_DIR}/")
# Add a column named time with ascending values staring from 0 representing time
ddf = ddf.map_partitions(append_time_column)
ddf = ddf.fillna(0)
ddf
###Output
_____no_output_____
###Markdown
3. Generate Features for individual partitions in parallel using Dask Here I am going to parallelize the function `tsfresh.feature_extraction.extract_features()` using Dask's `map_partitions()`.
###Code
from tsfresh.feature_extraction import extract_features
from tsfresh.feature_extraction.settings import MinimalFCParameters
def custom_extract_features(df, column_id, column_sort, default_fc_parameters):
"""
Generate features using `extract_features` of `tsfresh` and then rename and
reset axis.
Setting `n_jobs` to 0 disable multiprocessing functionality
"""
feature_df = extract_features(df,
column_id=column_id,
column_sort=column_sort,
n_jobs=0,
default_fc_parameters=default_fc_parameters,
disable_progressbar=True)
feature_df = feature_df.rename_axis("segment_id").reset_index(drop=False)
feature_df.segment_id = feature_df.segment_id.astype('category')
return feature_df
my_fc = {
'maximum': None,
'minimum': None
}
ddf_features = ddf.map_partitions(custom_extract_features,
column_id='segment_id',
column_sort='time',
default_fc_parameters=my_fc)
ddf_features
###Output
_____no_output_____
###Markdown
4. Write extracted features back to hard drive
###Code
ddf_features.to_parquet(
path=f"{FEATURE_PATH}",
write_index=False,
partition_on="segment_id",
engine="pyarrow",
append=False)
###Output
_____no_output_____
###Markdown
5. Read generated features for verification Read using Pandas
###Code
SEGMENT_ID = "1999605295"
df = pd.read_parquet(f"{FEATURE_PATH}/segment_id={SEGMENT_ID}")
df.head()
###Output
_____no_output_____
###Markdown
Read using Dask
###Code
ddf_features_from_disk = dd.read_parquet(path=f"{FEATURE_PATH}/*/*.parquet")
ddf_features_from_disk
ddf_features_from_disk.partitions[3].compute()
client.close()
cluster.close()
###Output
_____no_output_____ |
Modulo2/Code/2.3.-Aprendizaje No supervizado Kmeans.ipynb | ###Markdown
Module II: Unsupervised Learning: K-means Introduction K-Means is an unsupervised clustering algorithm. It is used when we have a lot of unlabeled data. The goal of this algorithm is to find "K" groups (clusters) within the raw data. **How does it work?** The algorithm works iteratively to assign each "sample" to one of the "K" groups based on its characteristics. Samples are grouped by the similarity of their features (the columns). As a result of running the algorithm we will have:> The `"centroids"` of each group, which are the "coordinates" of each of the K clusters and will be used to label new samples.> `Labels` for the training dataset, each label belonging to one of the K groups formed.The groups are defined "organically", that is, their position is adjusted on each iteration of the process until the algorithm converges. Once the centroids have been found, we should analyze them to see which characteristics make each one unique compared with the other groups. These groups are the labels that the algorithm generates. K-Means Use Cases Some use cases are:> **Behavioral segmentation:** relating a user's shopping cart, their action times and profile information.> **Inventory categorization:** grouping products by sales activity.> **Detecting anomalies or suspicious activity:** based on behavior on a website, telling a troll -or a bot- apart from a normal user. K-means Algorithm The algorithm uses an **iterative** process in which the groups are adjusted to produce the final result. To run the algorithm we must pass the `dataset` and a value of `K` as input. The dataset consists of the features of each point. The initial positions of the K centroids are assigned at random from points of the input dataset. It then iterates over two steps:> 1.- **Assignment step** $argmin_{c_i \in C} dist(c_i, x)^2$> 2.- **Centroid update step** In this step the centroids of each group are recalculated. This is done by taking the mean of all the points assigned in the previous step. $c_i = \frac{1}{|s_i|}\sum_{x_i \in s_i} x_i$The algorithm iterates between these steps until a stopping criterion is met:* there are no changes in the points assigned to the groups,* or the sum of the distances is minimized,* or a maximum number of iterations is reached.The algorithm converges to a result that may only be a local optimum, so it is advisable to run it more than once with random initial points to confirm whether there is a better solution. Criteria for Choosing the Number of Groups> Elbow criterion> Gradient criterion Example 1
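As a quick illustration of the two steps and the elbow criterion described above (a minimal sketch added for reference — the synthetic blobs, variable names and `random_state` values are illustrative only and not part of the original exercise):
```python
# Minimal K-means + elbow-criterion sketch on synthetic data.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

# Elbow criterion: plot the inertia (sum of squared distances to the nearest centroid) vs. K
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in range(1, 11)]
plt.plot(range(1, 11), inertias, marker="o")
plt.xlabel("K"); plt.ylabel("inertia")
plt.show()

# Fit with the chosen K and read off the labels and centroids
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
labels, centroids = km.labels_, km.cluster_centers_
```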
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
#%% Generate random data
#%% Apply the K-means algorithm
#%% Selection criterion
#%% Determining the optimal number of groups
#%% Apply the K-means algorithm with 2 groups
###Output
_____no_output_____
###Markdown
Example 2
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
import pandas as pd
#%% Read the data
#%% Drop the time and class columns
#%% Standardize the data
#%% Apply the clustering algorithm
# Apply the elbow selection criterion
# plot the inertias
#%% Run the algorithm with k = 11
#%% Get the centroids
# Choosing 3 variables to plot
# Creating figure
# Creating plot
###Output
_____no_output_____
###Markdown
Example 2You own a supermarket mall and, through membership cards, you have some basic data about your customers, such as customer ID, age, gender, annual income and spending score.You are the owner of the mall and want to understand your customers. You want to know which customers can be target customers so that the marketing team can plan a campaign.**Who are your target customers with whom you can start the marketing strategy?**To answer the question above we need to do the following:>1.- Data quality report (DQR) >2.- Data cleaning>3.- Exploratory data analysis (EDA)>4.- Apply the group-selection criterion -> the optimal number of groups>5.- Apply k-means with the optimal number of groups>6.- Conclusions or comments about the results
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.cluster import KMeans
from CDIN import CDIN as cd
#%% Read the data
#%% 1.- Data quality report (DQR)
#%% 2.- Data cleaning
#%% 3.- EDA
## 1st insight
## 2nd insight (age ranges)
#%% 4.- Apply the group-selection criterion
# Looking at the elbow criterion, we can see that with 5 groups
# a good classification can be obtained
#%% 5.- Apply k-means with the optimal number of groups
#%% 6.- Conclusions or comments about the results
# Visualize all the clusters
###Output
_____no_output_____
###Markdown
Activity 3Cluster Twitter users according to their personality with K-means.>1.- Data quality report (DQR) >2.- Data cleaning>3.- Exploratory data analysis (EDA) (obtain at least 3 insights)>4.- Apply the group-selection criterion -> the optimal number of groups>5.- Apply k-means with the optimal number of groups>6.- Plot, conclude and comment on the results.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sb
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min
from mpl_toolkits.mplot3d import Axes3D
#Read the data
## summary statistics table provided by the Pandas dataframe:
# DQR of the dataframe
###Output
_____no_output_____
###Markdown
The file contains 9 distinct categories -occupations- which are:1-> Actor/actress2-> Singer3-> Model4-> TV, series5-> Radio6-> Technology7-> Sports8-> Politics9-> Writer
###Code
## Histograms
###Output
_____no_output_____
###Markdown
The variables that can be useful for the clustering could be `["op","ex","ag"]`
###Code
# Create the figure
# Plot
###Output
_____no_output_____
###Markdown
Choosing the optimal number of groups We will find the value of K using the elbow criterion
###Code
# Elbow criterion
# plot the inertias
###Output
_____no_output_____
###Markdown
The curve is actually quite “smooth”. I consider 5 to be a good number for K. Based on your own judgement, it could be a different value.
###Code
#Apply k-means with the optimal number of groups
###Output
_____no_output_____
###Markdown
Classifying new samples We can group and label new Twitter users from their features and classify them.
###Code
## Get the group of a new sample
###Output
_____no_output_____
###Markdown
Example 4
###Code
## Import digits
# Cluster with K-means
###Output
_____no_output_____ |
NEU_ADS_Student_Project_Portfolio_Examples/Detection of Brain Illnesses using Machine Learning/Project/PortfolioBlog.ipynb | ###Markdown
PORTFOLIO BLOG INFO 7390 Vignesh MuraliNUID: 001886775 What is Alzheimer's Disease?Alzheimer's disease is the most common cause of dementia — a group of brain disorders that cause the loss of intellectual and social skills. In Alzheimer's disease, the brain cells degenerate and die, causing a steady decline in memory and mental function.
###Code
from IPython.display import Image
from IPython.core.display import HTML
Image(url= "https://www.nia.nih.gov/sites/default/files/inline-images/brain_slices_alzheimers_0.jpg")
###Output
_____no_output_____
###Markdown
What are we trying to do?In this blog, we are trying to explain how we can build Machine Learning classification models to detect the presence of Alzheimer's Disease using existing medical data.Before we proceed let's define some essential concepts which are to be known. Supervised Learning: Supervised learning is where you have input variables (x) and an output variable (Y) and you use an algorithm to learn the mapping function from the input to the output.Y = f(X)The goal is to approximate the mapping function so well that when you have new input data (x) that you can predict the output variables (Y) for that data.It is called supervised learning because the process of an algorithm learning from the training dataset can be thought of as a teacher supervising the learning process. Classification: A classification model attempts to draw some conclusion from observed values. Given one or more inputs a classification model will try to predict the value of one or more outcomes. Outcomes are labels that can be applied to a dataset. For example, when filtering emails “spam” or “not spam”.There are various classification models in Machine Learning such as Random Forests Classifier and Naive Baye's Classifier. Neural Networks:Artificial neural networks (ANNs) or connectionist systems are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems "learn" (i.e. progressively improve performance on) tasks by considering examples, generally without task-specific programming.A deep neural network (DNN) is an artificial neural network (ANN) with multiple hidden layers between the input and output layers. Let's get started!We still start off by obtaining the dataset which we are going to use.The dataset has been obtained from https://www.oasis-brains.org/.- This set consists of a longitudinal collection of 150 subjects aged 60 to 96. Each subject was scanned on two or more visits, separated by at least one year for a total of 373 imaging sessions. - For each subject, 3 or 4 individual T1-weighted MRI scans obtained in single scan sessions are included. The subjects are all right-handed and include both men and women. - 72 of the subjects were characterized as nondemented throughout the study. 64 of the included subjects were characterized as demented at the time of their initial visits and remained so for subsequent scans, including 51 individuals with mild to moderate Alzheimer’s disease. - Another 14 subjects were characterized as nondemented at the time of their initial visit and were subsequently characterized as demented at a later visit. The first step is to import all the required packages
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import tree
from sklearn import datasets, linear_model, metrics
from sklearn.metrics import confusion_matrix,accuracy_score
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.decomposition import PCA
from sklearn.model_selection import KFold
from sklearn.preprocessing import normalize, StandardScaler
from scipy.stats import multivariate_normal
from collections import Counter
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder, LabelBinarizer
from keras.wrappers.scikit_learn import KerasClassifier
from keras.models import Sequential
from keras.layers import Dense, Activation
###Output
_____no_output_____
###Markdown
Next we clean the dataset of null values and unwanted columns
###Code
df=pd.read_csv('oasis_longitudinal.csv')
df2=df
df.isnull().sum()
df = df.fillna(method='ffill')
df.isnull().sum()
df = df.drop('Hand',1)
###Output
_____no_output_____
###Markdown
Now our data is ready for preprocessing and analysis!It is important to remove irrelevant columns from our dataset because they could affect the performance of our model. PreprocessingWe map categorical values to integer values and we standardize our data using StandardScaler() because some classification models perform better with standardized data.
###Code
X = df.drop('Group', axis=1)
X = X.drop(['Subject ID','MRI ID','M/F','SES','Visit'], axis=1)
y = df['Group']
size_mapping={'Demented':1,'Nondemented':2,'Converted':3,'M':4,'F':5}
df2['Group'] = df2['Group'].map(size_mapping)
from sklearn.preprocessing import normalize, StandardScaler
sc_x = StandardScaler()
X2 = sc_x.fit_transform(X)
###Output
_____no_output_____
###Markdown
Split data into a Training Set and a Test SetThe training set contains a known output and the model learns on this data in order to be generalized to other data later on.We have the test dataset (or subset) in order to test our model’s prediction on this subset.
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
X_train2, X_test2, y_train2, y_test2 = train_test_split(X2, y, random_state=1)
###Output
_____no_output_____
###Markdown
Selecting best features for classificationAll kinds of tree methods calculate their splits by mathematically determining which split will most effectively help distinguish the classes. This is how the Random Forest method ranks its features by importance, depending on which feature allows the best split.
###Code
from sklearn.ensemble import RandomForestClassifier
random_forest = RandomForestClassifier(n_estimators=40, max_depth=5, random_state=1,max_features=5)
random_forest.fit(X_train, y_train)
importances=100*random_forest.feature_importances_
sorted_feature_importance = sorted(zip(importances, list(X_train)), reverse=True)
features_pd = pd.DataFrame(sorted_feature_importance)
print(features_pd)
sns.barplot(x=0, y=1, data=features_pd,palette='Reds');
plt.show()
###Output
0 1
0 63.501291 CDR
1 12.377521 MMSE
2 8.972169 MR Delay
3 4.064768 nWBV
4 4.039277 Age
5 2.810986 ASF
6 2.342095 eTIV
7 1.891893 EDUC
###Markdown
Clinical Dementia Rating (CDR) seems to be the most important feature.The Clinical Dementia Rating or CDR is a numeric scale used to quantify the severity of symptoms of dementia.CDR:- 0 No dementia- 0.5 Slightly Dementia- 1 Demented- 2 Severely DementedWe may eliminate the 3 lowest features to improve the accuracy of our model. Classification of dataNow as we have cleaned, pre-processed, split and selected features for our dataset, we may finally apply the classification models and view the results produced. **We start off with the Support Vector Classifier.**A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. In other words, given labeled training data (supervised learning), the algorithm outputs an optimal hyperplane which categorizes new examples. First we create the model with desired parameters.
###Code
Image(url= "http://38.media.tumblr.com/0e459c9df3dc85c301ae41db5e058cb8/tumblr_inline_n9xq5hiRsC1rmpjcz.jpg")
from sklearn.svm import SVC
supvc = SVC(kernel='linear',C=2)
###Output
_____no_output_____
###Markdown
We attempt to fit our training data into the model we just created
###Code
supvc.fit(X_train2,y_train2)
###Output
_____no_output_____
###Markdown
Now that the model has successfully fit the data, we may predict new values using the test data.Then, using the accuracy_score function from Scikit-Learn's metrics module, we may view how well the model performed
###Code
y_predict = supvc.predict(X_test2)
svcscore=accuracy_score(y_test2,y_predict)*100
print('Accuracy of Support vector classifier is ')
print(100*accuracy_score(y_test2,y_predict))
###Output
Accuracy of Support vector classifier is
92.5531914893617
###Markdown
Let us construct the confusion matrix to view the exact number of accurate predictions
###Code
from sklearn.metrics import confusion_matrix
pd.DataFrame(
confusion_matrix(y_test, y_predict),
columns=['Predicted Healthy', 'Predicted Alzheimers','Predicted Converted'],
index=['True Healthy', 'True Alzheimers','True converted']
)
###Output
_____no_output_____
###Markdown
Observations:- Extremely low accuracy of 56% when using the RBF kernel.- High computation time on the poly kernel & 90% accuracy.- Highest accuracy obtained on the linear kernel with 92.55%.- Accuracy slightly increases when the penalty parameter C is set to 2.We have successfully classified patients into "Demented" or "Nondemented" with the Support Vector Classifier with an accuracy of 92.55%! Similarly, this process can be repeated with several other classification models provided by Scikit-Learn to perform classification.You can choose from the following classification models and discover the most accurate one for this task.http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html **Using Random Forests Classifier**A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting.
###Code
Image(url= "http://www.globalsoftwaresupport.com/wp-content/uploads/2018/02/ggff5544hh.png")
from sklearn.metrics import accuracy_score
y_predict = random_forest.predict(X_test)
rfscore = 100*accuracy_score(y_test, y_predict)
print('Accuracy of Random Forests Classifier Accuracy is ')
print(100*accuracy_score(y_test,y_predict))
from sklearn.metrics import confusion_matrix
pd.DataFrame(
confusion_matrix(y_test, y_predict),
columns=['Predicted Healthy', 'Predicted Alzheimers','Predicted Converted'],
index=['True Healthy', 'True Alzheimers','True converted']
)
###Output
Accuracy of Random Forests Classifier Accuracy is
92.5531914893617
###Markdown
Observations:- The highest accuracy was attained when max_features was set to 5.- When 5 features are considered for the best split, we obtain the greatest accuracy in this model (92.55%)- Standardization does not make a difference to the accuracy. **Using K Nearest Neighbors**K nearest neighbors is a simple algorithm that stores all available cases and classifies new cases based on a similarity measure (e.g., distance functions).
###Code
Image(url= "http://adataanalyst.com/wp-content/uploads/2016/07/kNN-1.png")
from sklearn.neighbors import KNeighborsClassifier
nneighbor = KNeighborsClassifier(n_neighbors=8,metric='euclidean')
nneighbor.fit(X_train2, y_train2)
y_predict = nneighbor.predict(X_test2)
knscore = 100*accuracy_score(y_test2, y_predict)
print('Accuracy of K Nearest Neighbors Classifier is ')
print(100*accuracy_score(y_test2,y_predict))
pd.DataFrame(
confusion_matrix(y_test2, y_predict),
columns=['Predicted Healthy', 'Predicted Alzheimers','Predicted Converted'],
index=['True Healthy', 'True Alzheimers','True converted']
)
###Output
Accuracy of K Nearest Neighbors Classifier is
88.29787234042553
###Markdown
Observations:- Accuracy plateaus after using 8 neighbors.- Accuracy remains the same with all distance measures ( minkowski, manhattan, euclidean ). **Using Decision Tree Classifier**Decision tree learning uses a decision tree (as a predictive model) to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves).
###Code
Image(url= "http://dataaspirant.com/wp-content/uploads/2017/01/B03905_05_01-compressor.png")
from sklearn.tree import DecisionTreeClassifier
dectree = DecisionTreeClassifier(max_features=5)
dectree.fit(X_train, y_train)
y_predict = dectree.predict(X_test)
decscore=100*accuracy_score(y_test, y_predict)
print('Accuracy of Decision Tree Classifier is ')
print(100*accuracy_score(y_test,y_predict))
pd.DataFrame(
confusion_matrix(y_test, y_predict),
columns=['Predicted Healthy', 'Predicted Alzheimers','Predicted Converted'],
index=['True Healthy', 'True Alzheimers','True converted']
)
###Output
Accuracy of Decision Tree Classifier is
77.6595744680851
###Markdown
Observations:- Max_features is selected as 5, this means that when 5 features are selected for the best split, accuracy is the highest. **Using Naive Baye's Classifier**Naive Bayes is a kind of classifier which uses the Bayes Theorem. It predicts membership probabilities for each class such as the probability that given record or data point belongs to a particular class. The class with the highest probability is considered as the most likely class.
###Code
Image(url= "http://www.saedsayad.com/images/Bayes_rule.png")
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnb.fit(X_train,y_train)
y_predict = gnb.predict(X_test)
nbscore = 100*accuracy_score(y_test, y_predict)
print('Accuracy of Naive Bayes Classifier is ')
print(100*accuracy_score(y_test,y_predict))
pd.DataFrame(
confusion_matrix(y_test, y_predict),
columns=['Predicted Healthy', 'Predicted alzheimers','Predicted Converted'],
index=['True Healthy', 'True alzheimers','True converted']
)
###Output
Accuracy of Naive Bayes Classifier is
90.42553191489363
###Markdown
Observations:- Parameters have not been tuned because the only parameter available for tuning is priors (prior probabilities of the classes).- It is best to leave priors at 'None' because the priors will be adjusted automatically based on the data. **Using Ada Boost Classifier**The Ada-Boost classifier combines weak classifiers to form a strong classifier. A single algorithm may classify the objects poorly. But if we combine multiple classifiers, selecting the training set at every iteration and assigning the right amount of weight in the final voting, we can get a good accuracy score for the overall classifier.
###Code
Image(url= "https://www.researchgate.net/profile/Brendan_Marsh3/publication/306054843/figure/fig3/AS:393884896120846@1470920885933/Training-of-an-AdaBoost-classifier-The-first-classifier-trains-on-unweighted-data-then.png")
from sklearn.ensemble import AdaBoostClassifier
# AdaBoost with the discrete SAMME algorithm on the standardized data
abc = AdaBoostClassifier(algorithm='SAMME')
abc.fit(X_train2, y_train2)
y_predict = abc.predict(X_test2)
abcscore = 100*accuracy_score(y_test2, y_predict)
print('Accuracy of ADA Boost classifier is ')
print(abcscore)
pd.DataFrame(
    confusion_matrix(y_test2, y_predict),
    columns=['Predicted Healthy', 'Predicted Alzheimers', 'Predicted Converted'],
    index=['True Healthy', 'True Alzheimers', 'True Converted']
)
###Output
Accuracy of ADA Boost classifier is
90.42553191489363
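###Markdown
The effect of the boosting variant can be checked directly by training SAMME and the default SAMME.R side by side on the scaled data. A minimal sketch, assuming X_train2, y_train2, X_test2 and y_test2 from the earlier cells:
###Code
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score

# Compare the discrete (SAMME) and real-valued (SAMME.R) boosting algorithms
for algo in ['SAMME', 'SAMME.R']:
    booster = AdaBoostClassifier(algorithm=algo, n_estimators=50, random_state=0)
    booster.fit(X_train2, y_train2)
    acc = 100 * accuracy_score(y_test2, booster.predict(X_test2))
    print('%-8s test accuracy: %.2f%%' % (algo, acc))
###Output
_____no_output_____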
###Markdown
Observations:- Yields higher accuracy when the algorithm used is SAMME rather than the default SAMME.R.- SAMME uses the discrete class predictions of each weak learner, while SAMME.R uses their class probabilities; on this dataset the discrete variant performs better on the multiclass problem.- Accuracy greatly increases after using standardised data (from about 50% to 90%). **Using a Multilayer Perceptron Classifier**A multilayer perceptron (MLP) classifier is based on a feedforward artificial neural network. It consists of multiple layers of nodes, with each layer fully connected to the next. Nodes in the input layer represent the input data; all other nodes map their inputs to outputs by taking a linear combination of the inputs with the node's weights w and bias b and applying an activation function. We use 3 hidden layers of nodes, and the solver performs the weight optimization.
###Code
Image(url= "https://www.researchgate.net/profile/Mouhammd_Alkasassbeh/publication/309592737/figure/fig2/AS:423712664100865@1478032379613/MultiLayer-Perceptron-MLP-sturcture-334-MultiLayer-Perceptron-Classifier-MultiLayer.jpg")
from sklearn.neural_network import MLPClassifier
# Three hidden layers (10, 30 and 20 units) with tanh activations; lbfgs suits smaller datasets
mlp = MLPClassifier(max_iter=500, solver='lbfgs', hidden_layer_sizes=(10, 30, 20), activation='tanh')
mlp.fit(X_train2, y_train2)
y_predict = mlp.predict(X_test2)
mlpscore = 100*accuracy_score(y_test2, y_predict)
print(mlpscore)
pd.DataFrame(
    confusion_matrix(y_test2, y_predict),
    columns=['Predicted Healthy', 'Predicted Alzheimers', 'Predicted Converted'],
    index=['True Healthy', 'True Alzheimers', 'True Converted']
)
###Output
85.1063829787234
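###Markdown
Because the MLP is sensitive to feature scaling, the scaler can be bundled with the classifier in a Pipeline so the same transform is applied at fit and predict time. A minimal sketch, assuming the unscaled splits X_train, y_train, X_test and y_test:
###Code
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# Scaling inside the pipeline keeps preprocessing and model together
mlp_pipe = make_pipeline(
    StandardScaler(),
    MLPClassifier(max_iter=500, solver='lbfgs', hidden_layer_sizes=(10, 30, 20),
                  activation='tanh', random_state=0)
)
mlp_pipe.fit(X_train, y_train)
print('Pipeline test accuracy: %.2f%%' % (100 * mlp_pipe.score(X_test, y_test)))
###Output
_____no_output_____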
###Markdown
Observations:- Performance greatly increased (from about 50% to 81.23%) after using scaled data.- Accuracy is largely unaffected by the choice of activation function.- According to the scikit-learn documentation, the 'lbfgs' solver is better suited to smaller datasets than solvers such as 'adam'. **Using a Feed Forward Deep Learning Neural Network**[This code was adapted from: https://machinelearningmastery.com/multi-class-classification-tutorial-keras-deep-learning-library/ Author: Jason Brownlee]The feedforward neural network was the first and simplest type of artificial neural network devised. Information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any), to the output nodes; there are no cycles or loops in the network.
###Code
Image(url= "https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/Architecture/images/feedforward.jpg")
###Output
_____no_output_____
###Markdown
- Multi-class labels need to be converted to binary indicator labels (belongs / does not belong to each class). LabelBinarizer makes this easy with its fit_transform and transform methods. At prediction time, one assigns the class for which the model gave the greatest confidence.
###Code
# One-hot encode the training labels (one binary column per class)
lb = LabelBinarizer()
y_train3 = lb.fit_transform(y_train2)
###Output
_____no_output_____
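###Markdown
To map binarized (one-hot) rows back to the original class labels, LabelBinarizer's inverse_transform can be used; the short sketch below simply round-trips the first few training labels.
###Code
# y_train3 holds the one-hot encoded labels produced above; inverse_transform recovers
# the original class labels (an argmax over predicted probabilities works the same way)
print(y_train3[:5])
print(lb.inverse_transform(y_train3[:5]))
###Output
_____no_output_____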
###Markdown
- The Keras library provides a convenient wrapper that lets deep learning models be used as classification or regression estimators in scikit-learn. - The KerasClassifier class takes an argument build_fn, the function to call to create the model. You must define a function that builds your model, compiles it and returns it.
###Code
def baseline_model():
    classifier = Sequential()
    # Input layer and first hidden layer (8 input features)
    classifier.add(Dense(activation = 'relu', input_dim = 8, units = 8, kernel_initializer = 'uniform'))
    # Second hidden layer
    classifier.add(Dense(activation = 'relu', units = 15, kernel_initializer = 'uniform'))
    # Output layer: one unit per class (softmax is the more conventional choice for
    # mutually exclusive classes; sigmoid is kept here as in the original run)
    classifier.add(Dense(activation = 'sigmoid', units = 3, kernel_initializer = 'uniform'))
    # Compile the ANN; KerasClassifier will fit it to the training set later
    classifier.compile(optimizer = 'adamax', loss = 'categorical_crossentropy', metrics = ['accuracy'])
    return classifier
###Output
_____no_output_____
###Markdown
- This function is called "baseline_model" (defined above); its name is passed to the KerasClassifier below via the build_fn argument.
###Code
estimator = KerasClassifier(build_fn=baseline_model, epochs=150, batch_size=5, verbose=0)
###Output
_____no_output_____
###Markdown
- The model returned by build_fn is built and trained when fit() is called on the KerasClassifier wrapper.
###Code
# Fit the wrapped Keras model and evaluate it on the held-out test set
estimator.fit(X_train2, y_train2)
y_predict = estimator.predict(X_test2)
ffdnscore = 100*accuracy_score(y_test2, y_predict)
print(ffdnscore)
pd.DataFrame(
    confusion_matrix(y_test2, y_predict),
    columns=['Predicted Healthy', 'Predicted Alzheimers', 'Predicted Converted'],
    index=['True Healthy', 'True Alzheimers', 'True Converted']
)
###Output
_____no_output_____
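###Markdown
Since KerasClassifier exposes the scikit-learn estimator interface, it can also be evaluated with cross_val_score, as in the referenced tutorial. The sketch below (slow, since the network is retrained once per fold) passes the one-hot encoded labels y_train3 together with the scaled features.
###Code
from sklearn.model_selection import KFold, cross_val_score

# 5-fold cross-validation of the wrapped Keras model on the scaled training data
kfold = KFold(n_splits=5, shuffle=True, random_state=7)
results = cross_val_score(estimator, X_train2, y_train3, cv=kfold)
print('Cross-validated accuracy: %.2f%% (+/- %.2f%%)' % (100 * results.mean(), 100 * results.std()))
###Output
_____no_output_____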
###Markdown
Observations:- Using the Adamax optimizer we obtain the highest accuracy.- We start with the input layer, followed by two hidden layers with relu activation functions.- The output layer is then added and the model is compiled. **Comparing our classification models**We have run all eight classifiers and obtained the accuracy of each; below we visualize the accuracies to determine the best classifier for predicting Alzheimer's disease.
###Code
# Collect each classifier's accuracy into a tidy DataFrame and plot them, sorted ascending
score_arr = [{'Classifier': 'SVC',  'Accuracy': svcscore},
             {'Classifier': 'NB',   'Accuracy': nbscore},
             {'Classifier': 'DEC',  'Accuracy': decscore},
             {'Classifier': 'KNN',  'Accuracy': knscore},
             {'Classifier': 'RF',   'Accuracy': rfscore},
             {'Classifier': 'ABC',  'Accuracy': abcscore},
             {'Classifier': 'MLP',  'Accuracy': mlpscore},
             {'Classifier': 'FFDN', 'Accuracy': ffdnscore}]
score_df = pd.DataFrame(score_arr)
score_df = score_df.sort_values('Accuracy')
print(score_df)
sns.barplot(x="Classifier", y="Accuracy", data=score_df, palette='Reds');
plt.show()
###Output
Accuracy Classifier
2 77.659574 DEC
6 79.787234 MLP
3 88.297872 KNN
1 90.425532 NB
5 90.425532 ABC
7 90.425532 FFDN
0 92.553191 SVC
4 92.553191 RF
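###Markdown
Accuracy alone can hide per-class behaviour when the classes are of different sizes, so a per-class precision/recall summary is a useful final check. The sketch below reports it for the last model evaluated above (the feed-forward network), whose test-set predictions are still held in y_predict at this point.
###Code
from sklearn.metrics import classification_report

# Per-class precision, recall and F1 for the feed-forward network's test predictions
print(classification_report(y_test2, y_predict))
###Output
_____no_output_____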